TY - JOUR
T1 - Multi-modal Longitudinal Representation Learning for Predicting Neoadjuvant Therapy Response in Breast Cancer Treatment
AU - Gao, Yuan
AU - Tan, Tao
AU - Wang, Xin
AU - Beets-Tan, Regina
AU - Zhang, Tianyu
AU - Han, Luyi
AU - Portaluri, Antonio
AU - Lu, Chunyao
AU - Liang, Xinglong
AU - Teuwen, Jonas
AU - Zhou, Hong Yu
AU - Mann, Ritse
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2025
Y1 - 2025
N2 - Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing opportunities to leverage temporal self-supervision for disease progression tracking. In addition, extracting dynamic progression from longitudinal unannotated images with corresponding textual data poses challenges. In this work, we explicitly account for longitudinal NAT examinations and accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning pipeline (MLRL), a temporal foundation model that employs a multi-scale self-supervision scheme, including single-time-scale vision-text alignment (VTA) learning and multi-time-scale visual/textual progress (TVP/TTP) learning, to extract temporal representations from each modality, thereby facilitating the downstream evaluation of tumor progression. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. Results from internal and external datasets demonstrate that our approach not only enhances label efficiency across the zero-, few-, and full-shot regime experiments but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas in temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating the temporal clinical decision-making process.
AB - Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing opportunities to leverage temporal self-supervision for disease progression tracking. In addition, extracting dynamic progression from longitudinal unannotated images with corresponding textual data poses challenges. In this work, we explicitly account for longitudinal NAT examinations and accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning pipeline (MLRL), a temporal foundation model that employs a multi-scale self-supervision scheme, including single-time-scale vision-text alignment (VTA) learning and multi-time-scale visual/textual progress (TVP/TTP) learning, to extract temporal representations from each modality, thereby facilitating the downstream evaluation of tumor progression. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. Results from internal and external datasets demonstrate that our approach not only enhances label efficiency across the zero-, few-, and full-shot regime experiments but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas in temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating the temporal clinical decision-making process.
KW - breast cancer
KW - longitudinal medical imaging
KW - neoadjuvant therapy response prediction
KW - self-supervised learning
KW - temporal foundation model
KW - vision-language representation learning
UR - http://www.scopus.com/inward/record.url?scp=85217906024&partnerID=8YFLogxK
U2 - 10.1109/JBHI.2025.3540574
DO - 10.1109/JBHI.2025.3540574
M3 - Article
AN - SCOPUS:85217906024
SN - 2168-2194
JO - IEEE Journal of Biomedical and Health Informatics
JF - IEEE Journal of Biomedical and Health Informatics
ER -