
Multi-Modal Longitudinal Representation Learning for Predicting Neoadjuvant Therapy Response in Breast Cancer Treatment

  • Yuan Gao
  • Tao Tan
  • Xin Wang
  • Regina Beets-Tan
  • Tianyu Zhang
  • Luyi Han
  • Antonio Portaluri
  • Chunyao Lu
  • Xinglong Liang
  • Jonas Teuwen
  • Hong Yu Zhou
  • Ritse Mann
  • Netherlands Cancer Institute
  • Maastricht University
  • Radboud University Nijmegen
  • University of Messina
  • Harvard University

Research output: Article, peer-reviewed

8 Citations (Scopus)

Abstract

Longitudinal medical imaging is crucial for monitoring neoadjuvant therapy (NAT) response in clinical practice. However, mainstream artificial intelligence (AI) methods for disease monitoring commonly rely on extensive segmentation labels to evaluate lesion progression. While self-supervised vision-language (VL) learning efficiently captures medical knowledge from radiology reports, existing methods focus on single time points, missing opportunities to leverage temporal self-supervision for disease progression tracking. In addition, extracting dynamic progression from longitudinal unannotated images with corresponding textual data poses challenges. In this work, we explicitly account for longitudinal NAT examinations and accompanying reports, encompassing scans before NAT and follow-up scans during mid-/post-NAT. We introduce the multi-modal longitudinal representation learning pipeline (MLRL), a temporal foundation model that employs a multi-scale self-supervision scheme, including single-time-scale vision-text alignment (VTA) learning and multi-time-scale visual/textual progress (TVP/TTP) learning, to extract temporal representations from each modality, thereby facilitating the downstream evaluation of tumor progression. Our method is evaluated against several state-of-the-art self-supervised longitudinal learning and multi-modal VL methods. Results on internal and external datasets demonstrate that our approach not only enhances label efficiency across zero-, few-, and full-shot regime experiments but also significantly improves tumor response prediction in diverse treatment scenarios. Furthermore, MLRL enables interpretable visual tracking of progressive areas across temporal examinations, offering insights into longitudinal VL foundation tools and potentially facilitating the temporal clinical decision-making process.
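The vision-text alignment (VTA) objective described in the abstract follows the contrastive paradigm common in vision-language pretraining. Below is a minimal sketch of a symmetric InfoNCE alignment loss between paired image and report embeddings; this is an illustrative assumption, not the authors' implementation, and all function and variable names are hypothetical:

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning paired image/report embeddings.

    Row i of img_emb and row i of txt_emb are treated as a positive pair;
    all other rows in the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix
    labels = np.arange(len(img))

    def xent(l):
        # numerically stable cross-entropy with positives on the diagonal
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

In a longitudinal setting, analogous contrastive terms could compare embeddings of the same patient across pre-NAT and mid-/post-NAT time points; the paper's TVP/TTP objectives operate at that multi-time scale.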

Original language: English
Pages (from-to): 9041-9050
Number of pages: 10
Journal: IEEE Journal of Biomedical and Health Informatics
Volume: 29
Issue number: 12
DOIs
Publication status: Published - 2025

UN SDG

This research output contributes to the following Sustainable Development Goals (SDGs):

  1. Good health and well-being

