Collaborative boundary-aware context encoding networks for error map prediction

Zhenxi Zhang, Chunna Tian, Xinbo Gao, Jie Li, Zhicheng Jiao, Cui Wang, Zhusi Zhong

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)


Accurately assessing the quality of automatically generated medical image segmentations is essential for guaranteeing the reliability of computer-assisted diagnosis (CAD) results. Many researchers have studied segmentation quality estimation without labeled ground truths. Recently, a novel idea was proposed that transforms segmentation quality assessment (SQA) into a pixel-wise or voxel-wise error map segmentation task. However, the straightforward application of vanilla segmentation architectures in the medical domain fails to achieve satisfactory error segmentation results. In this paper, we propose collaborative boundary-aware context encoding networks, called EP-Net, for the error segmentation task. Specifically, we propose a collaborative feature transformation branch for better feature fusion between images and masks and for precise localization of error regions. Further, we propose a context encoding module that utilizes the global predictor from the error map to enhance the feature representation and regularize the networks. Extensive experiments on the IBSR V2.0, ACDC, and M&Ms datasets demonstrate that EP-Net achieves better error segmentation results than traditional segmentation patterns. Based on the error prediction results, we obtain a proxy metric of segmentation quality that has a high Pearson correlation coefficient with the real segmentation accuracy on all datasets.
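The abstract's final claim can be illustrated with a minimal sketch: derive a scalar proxy score from a predicted error map and check how well it correlates (Pearson) with real segmentation accuracy. The `proxy_quality` definition below (fraction of predicted-foreground voxels not flagged as erroneous) is a hypothetical stand-in, not the paper's exact formula.

```python
import numpy as np

def proxy_quality(pred_mask, error_map):
    """Hypothetical proxy segmentation-quality score from a predicted
    error map: the fraction of predicted-foreground voxels NOT flagged
    as erroneous (illustrative only, not EP-Net's exact metric)."""
    fg = pred_mask > 0
    if fg.sum() == 0:
        return 1.0  # empty prediction: nothing to flag as erroneous
    return 1.0 - float((error_map[fg] > 0).mean())

def pearson(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
```

In practice, one would compute `proxy_quality` per case across a dataset and report `pearson(proxy_scores, real_dice_scores)` as the correlation with the real segmentation accuracy.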

Original language: English
Article number: 108515
Journal: Pattern Recognition
Publication status: Published - May 2022
Externally published: Yes


  • Error map prediction
  • Medical image segmentation
  • Segmentation quality assessment

