Abstract
Purpose: To develop a foundational pretraining method for digital mammography that extracts fine-grained visual–language representations from images and reports in label-limited settings.

Materials and Methods: A multiview mammogram–report pretraining framework for automated breast cancer analysis was developed using retrospectively collected data from January 2010 to December 2020. This framework provides visual explanations of the model’s learning, allowing researchers to “visualize what you learn.” The abnormality-aware technique was tailored to the mammographic characteristics of dense fibroglandular tissue. The proposed framework was evaluated on downstream tasks from four external medical centers, involving label-efficient abnormality recognition in mammograms, including malignancy classification, segmentation, and localization. Statistical analyses were performed using the DeLong test and the paired t test for the area under the receiver operating characteristic curve and Dice scores, respectively.

Results: The visualization results, including abnormality-enhanced mammograms and abnormality-awareness maps, show that the developed model successfully captures relationships between multiview mammograms and their corresponding reports. This reduces false positives for breast cancer by 37% and enables zero-shot abnormality segmentation. Furthermore, the developed model consistently outperformed existing approaches in fine-tuning for both malignancy classification (area under the receiver operating characteristic curve, INbreast: 0.90 vs 0.78 [P < .001]; Curated Breast Imaging Subset of Digital Database for Screening Mammography [CBIS-DDSM]: 0.85 vs 0.79 [P < .01]; Chinese Mammography Database: 0.85 vs 0.78 [P < .001]; and Cohort of Screen-age Women-Case Control: 0.86 vs 0.77 [P < .001]) and segmentation and localization (Dice score, INbreast: 0.75 vs 0.63 [P < .001]; CBIS-DDSM: 0.76 vs 0.61 [P < .001]).

Conclusion: The proposed framework enhances interpretability and fine-grained multimodal foundational learning for multiview mammograms and associated reports.
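The two metrics reported above can be made concrete with a minimal sketch. The functions below are an illustration only, not the paper's evaluation code: Dice score for segmentation overlap, and a rank-based AUROC (the Mann–Whitney U formulation, which is also the quantity the DeLong test compares). Function names and the pure-Python style are assumptions for readability.

```python
def dice_score(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) over flattened binary (0/1) masks."""
    intersection = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks count as perfect agreement.
    return 2 * intersection / total if total else 1.0


def auroc(scores, labels):
    """AUROC via the Mann–Whitney U statistic; ties earn half credit."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, a prediction mask overlapping the ground truth in 1 of 3 foreground pixels gives a Dice of 2/3, and a classifier that ranks every malignant case above every benign case gives an AUROC of 1.0.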
| Original language | English |
|---|---|
| Article number | e240646 |
| Journal | Radiology: Artificial Intelligence |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2026 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 3 Good Health and Well-being
Keywords
- Breast
- Breast Cancer
- Diagnosis
- Explainable AI
- Feature Detection
- Mammography
- Quantification
- Representation Learning
- Segmentation
- Transfer Learning
- Translation
- Unsupervised Learning
- Visual-Language Foundation Model
Title: Visualizing Radiologic Connections: An Explainable Coarse-to-Fine Foundation Model with Multiview Mammograms and Associated Reports