Multi-modal Longitudinal Representation Learning for Predicting Neoadjuvant Therapy Response in Breast Cancer Treatment

Yuan Gao, Tao Tan, Xin Wang, Regina Beets-Tan, Tianyu Zhang, Luyi Han, Antonio Portaluri, Chunyao Lu, Xinglong Liang, Jonas Teuwen, Hong Yu Zhou, Ritse Mann

Research output: Contribution to journal › Article › peer-review

Abstract

Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing opportunities to leverage temporal self-supervision for tracking disease progression. In addition, extracting dynamic progression from longitudinal unannotated images with corresponding textual data is challenging. In this work, we explicitly account for longitudinal NAT examinations and accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning (MLRL) pipeline, a temporal foundation model that employs a multi-scale self-supervision scheme, including single-time-scale vision-text alignment (VTA) learning and multi-time-scale visual/textual progress (TVP/TTP) learning, to extract temporal representations from each modality, thereby facilitating downstream evaluation of tumor progression. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. Results on internal and external datasets demonstrate that our approach not only enhances label efficiency across zero-, few-, and full-shot regimes but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas across temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating temporal clinical decision-making.
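The abstract's single-time-scale vision-text alignment (VTA) component pairs each scan with its report. The paper's exact formulation is not given here, but such alignment objectives are commonly implemented as a symmetric contrastive (InfoNCE-style) loss over a batch of paired image/report embeddings; the sketch below illustrates that generic idea in NumPy, with all names and the temperature value being illustrative assumptions rather than details from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere before computing similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def vision_text_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched image/report pairs sit on the diagonal
    of the batch similarity matrix and are pulled together; mismatched pairs
    are pushed apart. Inputs are (batch, dim) embedding arrays."""
    v = l2_normalize(img_emb)
    t = l2_normalize(txt_emb)
    logits = v @ t.T / temperature          # (B, B) cosine similarities, scaled
    idx = np.arange(logits.shape[0])        # ground-truth pairing: row i <-> col i

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)            # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()                      # NLL of the diagonal

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Well-aligned paired embeddings should yield a lower loss than randomly mismatched ones; the multi-time-scale TVP/TTP objectives would operate analogously on representations drawn from different examination time points.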

Original language: English
Journal: IEEE Journal of Biomedical and Health Informatics
DOIs
Publication status: Accepted/In press - 2025

Keywords

  • breast cancer
  • longitudinal medical imaging
  • neoadjuvant therapy response prediction
  • self-supervised learning
  • temporal foundation model
  • vision-language representation learning
