Abstract
Domain generalization poses significant challenges, particularly as models must generalize effectively to unseen target domains after training on multiple source domains. Traditional approaches typically aim to minimize domain discrepancies; however, they often fall short when handling complex data variations and class imbalance. In this paper, we propose an innovative model, the self-supervised learning multi-classifier ensemble (SSL-MCE), to address these limitations. SSL-MCE integrates self-supervised learning within a dynamic multi-classifier ensemble framework, leveraging ResNet as a shared feature extraction backbone. By combining four distinct classifiers, it captures diverse and complementary features, thereby enhancing adaptability to new domains. A self-supervised rotation prediction task enables SSL-MCE to focus on intrinsic data structures rather than domain-specific details, learning robust domain-invariant features. To mitigate class imbalance, we incorporate adaptive focal attention loss (AFAL), which dynamically emphasizes challenging and rare instances, ensuring improved accuracy on difficult samples. Furthermore, SSL-MCE adopts a dynamic loss-based weighting scheme to prioritize more reliable classifiers in the final prediction. Extensive experiments conducted on public benchmark datasets, including PACS and DomainNet, indicate that SSL-MCE outperforms state-of-the-art methods, achieving superior generalization and resource efficiency through its streamlined ensemble framework.
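The abstract does not give the exact form of the adaptive focal attention loss (AFAL) or of the dynamic loss-based weighting scheme, so the following is only an illustrative sketch: `focal_weight` uses the standard focal-loss down-weighting factor, and `ensemble_weights` realizes "prioritize more reliable classifiers" as a softmax over negative per-classifier losses. The names, `gamma`, and `temperature` are assumptions, not the paper's definitions.

```python
import math

def focal_weight(p_correct, gamma=2.0):
    """Focal-style factor (1 - p)^gamma: easy samples (p_correct near 1)
    are down-weighted, hard or rare samples keep weight near 1."""
    return (1.0 - p_correct) ** gamma

def ensemble_weights(classifier_losses, temperature=1.0):
    """Dynamic loss-based weighting (assumed form): a softmax over
    negative losses, so lower-loss classifiers dominate the prediction."""
    exps = [math.exp(-loss / temperature) for loss in classifier_losses]
    total = sum(exps)
    return [e / total for e in exps]

# Example: four classifiers sharing a ResNet backbone, with per-batch losses.
losses = [0.2, 0.5, 0.4, 1.0]
weights = ensemble_weights(losses)  # weights sum to 1; classifier 0 gets the largest share
```

In a full model the weighted classifier outputs would be combined into the final prediction, and the rotation-prediction pretext loss would be added to the supervised objective during training.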
| Field | Value |
|---|---|
| Original language | English |
| Article number | e70098 |
| Journal | IET Image Processing |
| Volume | 19 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Jan 2025 |
UN SDG
This research output contributes to the following UN Sustainable Development Goals:
- Decent work and economic growth
- Responsible consumption and production
Fingerprint
Dive into the research topics of "Self-Supervised Learning for Domain Generalization With a Multi-Classifier Ensemble Approach". Together they form a unique fingerprint.