A multimodal vision–language model for generalizable annotation-free pathology localization

  • Hao Yang
  • Hong Yu Zhou
  • Jiarun Liu
  • Weijian Huang
  • Cheng Li
  • Zhihuan Li
  • Yuanxu Gao
  • Qiegen Liu
  • Yong Liang
  • Qi Yang
  • Song Wu
  • Tao Tan
  • Hairong Zheng
  • Kang Zhang
  • Shanshan Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Existing deep learning models for identifying pathology in clinical imaging data rely on expert annotations and lack generalization capabilities in open clinical environments. Here we present a generalizable vision–language model for Annotation-Free pathology Localization (AFLoc). The core strength of AFLoc is extensive multilevel semantic structure-based contrastive learning, which comprehensively aligns multigranularity medical concepts with abundant image features to adapt to the diverse expressions of pathologies without relying on expert image annotations. We conducted primary experiments on a dataset of 220,000 chest X-ray image–report pairs and performed validation across 8 external datasets encompassing 34 types of chest pathology. The results demonstrate that AFLoc outperforms state-of-the-art methods in both annotation-free localization and classification tasks. In addition, we assessed the generalizability of AFLoc on other modalities, including histopathology and retinal fundus images. We show that AFLoc exhibits robust generalization capabilities, even surpassing human benchmarks in localizing five different types of pathological images. These results highlight the potential of AFLoc in reducing annotation requirements and its applicability in complex clinical environments.
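The multilevel contrastive alignment described above can be illustrated with a minimal sketch: a symmetric InfoNCE-style loss applied at two granularities, pairing global image features with report embeddings and local image features with sentence embeddings. This is not the authors' implementation — the function names, the two-level decomposition into `info_nce` terms, and the weighting `w` are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere before computing similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(img, txt, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Matched pairs sit on the diagonal of the similarity matrix; all other
    entries in the same row/column act as negatives.
    """
    img = l2_normalize(img)
    txt = l2_normalize(txt)
    logits = img @ txt.T / temperature      # (B, B) cosine similarities
    labels = np.arange(len(img))            # i-th image matches i-th text

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)          # numerical stability
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()          # diagonal targets

    # Average the image->text and text->image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def multilevel_loss(img_global, report_emb, img_local, sentence_emb, w=0.5):
    """Combine report-level and sentence-level alignment terms.

    `w` (hypothetical) trades off coarse (whole-report) against fine
    (sentence-level) alignment.
    """
    return (w * info_nce(img_global, report_emb)
            + (1 - w) * info_nce(img_local, sentence_emb))
```

Because the text side is aligned at several granularities, a pathology phrase at inference time can be matched directly against local image features to produce a localization map without box or mask annotations — the property the abstract refers to as annotation-free localization.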

Original language: English
Journal: Nature Biomedical Engineering
Publication status: Accepted/In press - 2026
