
ImageNet pre-training and two-step transfer learning in chromosome image classification

  • Tianhao Chen
  • Can Xie
  • Wenhua Zhang
  • Yufei Li
  • Wei Ke
  • Tian Li
  • Xiujing Huang
  • Kefeng Li

Research output: Article › peer-review

Abstract

Chromosome image classification typically relies on ImageNet pre-training, yet the potential of leveraging intermediate domains from related staining techniques remains largely underexplored. Here, we evaluate two-step transfer learning, in which classifiers are first fine-tuned on an intermediate domain before being adapted to the final classification task, across Q-band (BioImLab dataset) and G-band (CIR dataset) chromosome classification. Each dataset serves as the intermediate domain for the other. Across 11 architecture families and three training approaches, models improve when domain similarity is high and data quality is limited: modern architectures (ConvNeXt, Swin Transformer, ViT, MobileNetV3) show +0.8 to +3.3 percentage-point gains in Macro-F1 on Q-band classification, while traditional CNNs benefit less or show no improvement. On the higher-quality G-band dataset, all architectures approach performance saturation, with minimal gains from two-step transfer (+0.1 to +0.7 percentage points). Consistent results across both transfer directions demonstrate that, with appropriate architecture selection and intermediate domain similarity, two-step transfer learning can boost performance when target datasets are challenging, while ImageNet pre-training alone suffices for high-quality data. The code is publicly available at https://github.com/MuscleOne/chromosome_TL.
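The two-step recipe described in the abstract is straightforward to reproduce in outline. The sketch below is a minimal, hedged illustration in PyTorch, not the authors' implementation (which is in the linked repository): it assumes torchvision's ImageNet-pretrained ConvNeXt-Tiny as the backbone, 24 chromosome classes for both domains, and user-supplied DataLoaders (`intermediate_loader`, `target_loader`) standing in for the CIR and BioImLab datasets.

```python
import torch
import torch.nn as nn
from torchvision import models


def replace_head(model, num_classes):
    """Swap the classification head for one sized to the new label set."""
    # ConvNeXt-Tiny in torchvision keeps its final Linear layer at classifier[2].
    in_features = model.classifier[2].in_features
    model.classifier[2] = nn.Linear(in_features, num_classes)
    return model


def fine_tune(model, loader, num_epochs=10, lr=1e-4, device="cuda"):
    """Plain full-network fine-tuning with AdamW and cross-entropy loss."""
    model.to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(num_epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model


def two_step_transfer(intermediate_loader, target_loader, num_classes=24):
    """Illustrative two-step transfer: ImageNet -> intermediate domain -> target domain.

    `intermediate_loader` and `target_loader` are hypothetical DataLoaders
    (e.g. G-band CIR images and Q-band BioImLab images, respectively).
    """
    # Step 0: start from ImageNet pre-training.
    model = models.convnext_tiny(
        weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1
    )

    # Step 1: fine-tune on the intermediate staining domain.
    model = replace_head(model, num_classes)
    model = fine_tune(model, intermediate_loader)

    # Step 2: re-initialise the head and fine-tune on the target domain.
    model = replace_head(model, num_classes)
    model = fine_tune(model, target_loader)
    return model
```

The single-step baseline reported in the paper corresponds to skipping Step 1 and fine-tuning the ImageNet-pretrained backbone directly on the target loader; hyperparameters such as the optimizer, learning rate, and epoch count above are placeholders rather than the values used in the study.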

Original language: English
Article number: 7572
Journal: Scientific Reports
Volume: 16
Issue number: 1
DOIs
Publication status: Published - Dec 2026
