Pseudo Label-Guided Data Fusion and output consistency for semi-supervised medical image segmentation

Tao Wang, Xinlin Zhang, Yuanbin Chen, Yuanbo Zhou, Longxuan Zhao, Bizhe Bai, Tao Tan, Tong Tong

Research output: Contribution to journal › Article › peer-review

Abstract

Supervised learning algorithms have become the benchmark for medical image segmentation tasks, but their effectiveness relies heavily on large amounts of labeled data, and annotating such data is a laborious and time-consuming process. Consequently, semi-supervised learning methods are becoming increasingly popular. We propose the Pseudo Label-Guided Data Fusion (PLGDF) framework, which builds upon the mean teacher network for segmenting medical images with limited annotation. We introduce a pseudo-label utilization scheme that combines labeled and unlabeled data to augment the dataset effectively. Additionally, we enforce consistency between different scales in the decoder module of the segmentation network and propose a loss function suitable for evaluating this consistency. Moreover, we apply a sharpening operation to the predicted results, further enhancing segmentation accuracy. Extensive experiments on the Pancreas-CT, LA, BraTS2019, and BraTS2023 datasets demonstrate superior performance, with Dice scores of 80.90%, 89.80%, 85.47%, and 89.39%, respectively, when 10% of the dataset is labeled. Compared to MC-Net, our method achieves improvements of 10.9%, 0.84%, 5.84%, and 0.63% on these datasets, respectively. The code for this study is available at https://github.com/ortonwang/PLGDF.
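The abstract mentions two mechanisms that lend themselves to a brief illustration: sharpening the predicted probability maps and enforcing consistency between decoder outputs at different scales. The sketch below is not the authors' implementation (see the linked repository for that); the function names, the temperature value, and the use of trilinear upsampling and a mean-squared-error consistency term are illustrative assumptions.

```python
# Minimal sketch of temperature sharpening and a multi-scale output-consistency
# loss, assuming a 3D segmentation network whose decoder emits logits at several
# resolutions. All names and hyperparameters here are hypothetical.
import torch
import torch.nn.functional as F


def sharpen(probs: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Sharpen per-voxel class probabilities: p^(1/T), renormalized over classes."""
    powered = probs ** (1.0 / temperature)
    return powered / powered.sum(dim=1, keepdim=True)


def multi_scale_consistency(outputs: list) -> torch.Tensor:
    """MSE between each coarse decoder output (upsampled to full resolution)
    and the sharpened finest-scale prediction, used as a pseudo target."""
    full = F.softmax(outputs[0], dim=1)          # finest-scale prediction
    target = sharpen(full).detach()              # sharpened pseudo target
    loss = torch.tensor(0.0, device=full.device)
    for out in outputs[1:]:                      # coarser decoder scales
        up = F.interpolate(out, size=full.shape[2:],
                           mode="trilinear", align_corners=False)
        loss = loss + F.mse_loss(F.softmax(up, dim=1), target)
    return loss / max(len(outputs) - 1, 1)


if __name__ == "__main__":
    # Toy example: batch of 1, 2 classes, three decoder scales of a 3D volume.
    outs = [torch.randn(1, 2, 32, 32, 32),
            torch.randn(1, 2, 16, 16, 16),
            torch.randn(1, 2, 8, 8, 8)]
    print(multi_scale_consistency(outs).item())
```

In a semi-supervised setting such a consistency term is typically added, with a weight, to the supervised Dice/cross-entropy loss computed on the labeled subset; the exact weighting and scheduling used by PLGDF are documented in the authors' repository.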

Original language: English
Article number: 107956
Journal: Biomedical Signal Processing and Control
Volume: 108
DOIs
Publication status: Published - Oct 2025

Keywords

  • Machine learning
  • Medical image segmentation
  • Pseudo Label
  • Semi-supervised learning
