PSANet: prototype-guided salient attention for few-shot segmentation

Hao Li, Guoheng Huang, Xiaochen Yuan, Zewen Zheng, Xuhang Chen, Guo Zhong, Chi Man Pun

Research output: Article, peer-reviewed

Abstract

Few-shot semantic segmentation aims to learn a generalized model that can segment unseen classes from only a few densely annotated samples. Most current metric-based prototype-learning methods extract prototypes directly from support samples via Masked Average Pooling and use them to guide query segmentation. However, these methods often overlook the semantic ambiguity of prototypes, their limited performance under extreme object variations, and the semantic similarity between different classes. In this paper, we introduce a novel network architecture named Prototype-guided Salient Attention Network (PSANet). Specifically, we employ prototype-guided attention to learn salient regions, assigning different attention weights to features at different spatial locations of the target so as to strengthen the salient regions within the prototype. To mitigate the influence of external distractor categories on the prototype, our proposed contrastive loss learns a more discriminative prototype, promoting inter-class feature separation and intra-class feature compactness. Moreover, we introduce a refinement operation for the multi-scale module to better capture complete contextual information across feature scales. Despite its simplicity, extensive experiments on the PASCAL-5i and COCO-20i datasets demonstrate the effectiveness of our approach. Our code is available at https://github.com/woaixuexixuexi/PSANet.
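As background for the prototype-extraction step the abstract refers to, the following is a minimal sketch of Masked Average Pooling: a support prototype is obtained by averaging backbone features over the foreground mask. The function name and tensor shapes are illustrative assumptions, not taken from the released PSANet code.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(features: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Compute a class prototype by averaging support features under the
    foreground mask (Masked Average Pooling).

    features: (B, C, H, W) support feature maps from the backbone
    mask:     (B, 1, h, w) binary foreground mask for the support class
    returns:  (B, C) prototype vectors
    """
    # Bring the mask to the spatial resolution of the feature maps.
    mask = F.interpolate(mask, size=features.shape[-2:],
                         mode="bilinear", align_corners=False)
    # Average only over masked (foreground) locations; the epsilon
    # guards against an empty mask.
    prototype = (features * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-5)
    return prototype
```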
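The abstract describes the contrastive loss only at a high level, so the sketch below shows one common InfoNCE-style formulation over class prototypes that matches the stated goal of inter-class separation and intra-class compactness; it is a generic stand-in, not necessarily PSANet's exact loss, and all names and the temperature parameter are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(query_feats: torch.Tensor,
                               prototypes: torch.Tensor,
                               labels: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss over class prototypes (illustrative).

    query_feats: (N, C) pooled query features, one per sample
    prototypes:  (K, C) one prototype per class
    labels:      (N,) ground-truth class index for each query
    """
    # Cosine similarities between each query feature and every prototype.
    q = F.normalize(query_feats, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = q @ p.t() / temperature  # (N, K)
    # Pull each query toward its own class prototype and push it away from
    # the others: intra-class compactness and inter-class separation.
    return F.cross_entropy(logits, labels)
```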

Original language: English
Journal: Visual Computer
Publication status: Accepted/In press, 2024
