
Self-Supervised Learning for Domain Generalization With a Multi-Classifier Ensemble Approach

Research output: Article, peer-reviewed

Abstract

Domain generalization poses significant challenges, particularly as models must generalize effectively to unseen target domains after training on multiple source domains. Traditional approaches typically aim to minimize domain discrepancies; however, they often fall short when handling complex data variations and class imbalance. In this paper, we propose an innovative model, the self-supervised learning multi-classifier ensemble (SSL-MCE), to address these limitations. SSL-MCE integrates self-supervised learning within a dynamic multi-classifier ensemble framework, leveraging ResNet as a shared feature extraction backbone. By combining four distinct classifiers, it captures diverse and complementary features, thereby enhancing adaptability to new domains. A self-supervised rotation prediction task enables SSL-MCE to focus on intrinsic data structures rather than domain-specific details, learning robust domain-invariant features. To mitigate class imbalance, we incorporate adaptive focal attention loss (AFAL), which dynamically emphasizes challenging and rare instances, ensuring improved accuracy on difficult samples. Furthermore, SSL-MCE adopts a dynamic loss-based weighting scheme to prioritize more reliable classifiers in the final prediction. Extensive experiments conducted on public benchmark datasets, including PACS and DomainNet, indicate that SSL-MCE outperforms state-of-the-art methods, achieving superior generalization and resource efficiency through its streamlined ensemble framework.
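The abstract names three mechanisms: a rotation-prediction pretext task, a focal-style loss that emphasizes hard or rare samples, and a dynamic loss-based weighting over the ensemble's classifiers. The NumPy sketch below illustrates each idea in isolation; the function names, the inverse-loss weighting rule, and the hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def make_rotation_batch(images):
    """Rotation-prediction pretext task (illustrative): each image is
    rotated by 0/90/180/270 degrees and labeled with its rotation class,
    so the model learns structure without domain-specific labels."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):            # k quarter-turns -> pretext label k
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

def focal_weight(p_true, gamma=2.0):
    """Focal-style modulation: samples the model already classifies
    confidently (p_true near 1) are down-weighted, so training focuses
    on hard and rare instances. gamma=2.0 is an assumed default."""
    return (1.0 - np.asarray(p_true)) ** gamma

def ensemble_weights(classifier_losses):
    """Dynamic loss-based weighting (one plausible rule): classifiers
    with lower recent loss receive proportionally higher weight in the
    final ensemble prediction."""
    inv = 1.0 / (np.asarray(classifier_losses, dtype=float) + 1e-8)
    return inv / inv.sum()
```

For example, four classifiers with losses `[0.5, 1.0, 2.0, 4.0]` would receive weights that sum to one, with the lowest-loss classifier weighted highest; this is a sketch of the weighting idea only, not the paper's exact scheme.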

Original language: English
Article number: e70098
Journal: IET Image Processing
Volume: 19
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2025

UN SDGs

This research output contributes to the following Sustainable Development Goals:

  1. Decent work and economic growth
  2. Responsible consumption and production

