
All-in-one medical image-to-image translation

  • Luyi Han
  • Tao Tan
  • Yunzhi Huang
  • Haoran Dou
  • Tianyu Zhang
  • Yuan Gao
  • Xin Wang
  • Chunyao Lu
  • Xinglong Liang
  • Yue Sun
  • Jonas Teuwen
  • S. Kevin Zhou
  • Ritse Mann
  • Radboud University Nijmegen
  • Netherlands Cancer Institute
  • Nanjing University of Information Science & Technology
  • University of Leeds
  • Maastricht University
  • Macao Polytechnic University
  • University of Science and Technology of China
  • CAS - Institute of Computing Technology

Research output: Article › peer-review

3 Citations (Scopus)

Abstract

The growing availability of public multi-domain medical image datasets enables training omnipotent image-to-image (I2I) translation models. However, integrating diverse protocols poses challenges in domain encoding and scalability. Therefore, we propose the “every domain all at once” I2I (EVA-I2I) translation model using DICOM-tag-informed contrastive language-image pre-training (DCLIP). DCLIP maps natural language scan descriptions into a common latent space, offering richer representations than traditional one-hot encoding. We develop the model using seven public datasets with 27,950 scans (3D volumes) for the brain, breast, abdomen, and pelvis. Experimental results show that our EVA-I2I can synthesize every seen domain at once with a single training session and achieve excellent image quality on different I2I translation tasks. Results for downstream applications (e.g., registration, classification, and segmentation) demonstrate that EVA-I2I can be directly applied to domain adaptation on external datasets without fine-tuning and that it also enables the potential for zero-shot domain adaptation for never-before-seen domains.
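The core idea behind DCLIP is to replace one-hot domain codes (which treat every protocol as equidistant) with embeddings of natural-language scan descriptions derived from DICOM tags, so that related protocols land near each other in a shared latent space. The toy sketch below illustrates only this encoding contrast, not the paper's trained model: the tag fields, the `describe`/`embed` helpers, and the hash-based bag-of-words "encoder" standing in for a trained CLIP-style text encoder are all hypothetical.

```python
import hashlib
import numpy as np

def describe(tags: dict) -> str:
    # Compose a natural-language scan description from DICOM-style tags
    # (field names here mimic standard DICOM attributes for illustration).
    return f"{tags['Modality']} {tags['SeriesDescription']} of the {tags['BodyPartExamined']}"

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy bag-of-words embedding: deterministically hash each token to a
    # random Gaussian vector and sum, then L2-normalise. A stand-in for a
    # trained text encoder; shared tokens produce overlapping embeddings.
    v = np.zeros(dim)
    for tok in text.lower().split():
        seed = int.from_bytes(hashlib.sha256(tok.encode()).digest()[:4], "big")
        v += np.random.default_rng(seed).standard_normal(dim)
    return v / (np.linalg.norm(v) + 1e-8)

t1_brain = {"Modality": "MR", "SeriesDescription": "T1 weighted", "BodyPartExamined": "brain"}
t2_brain = {"Modality": "MR", "SeriesDescription": "T2 weighted", "BodyPartExamined": "brain"}
ct_abd = {"Modality": "CT", "SeriesDescription": "contrast enhanced", "BodyPartExamined": "abdomen"}

e1, e2, e3 = (embed(describe(t)) for t in (t1_brain, t2_brain, ct_abd))
# Unlike one-hot codes, the two MR brain protocols share description tokens
# and so end up closer to each other than to the CT abdomen scan.
print(float(e1 @ e2) > float(e1 @ e3))
```

With one-hot encoding all three domains would be mutually orthogonal; the description-based embedding keeps the two MR brain protocols close, which is the property that lets a single model generalize across many domains and hints at zero-shot adaptation for unseen protocol descriptions.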

Original language: English
Article number: 101138
Journal: Cell Reports Methods
Volume: 5
Issue number: 8
DOIs
Publication status: Published - 18 Aug 2025

