TY - JOUR
T1 - Language-Driven Visual Consensus for Zero-Shot Semantic Segmentation
AU - Zhang, Zicheng
AU - Ke, Wei
AU - Zhu, Yi
AU - Liang, Xiaodan
AU - Liu, Jianzhuang
AU - Ye, Qixiang
AU - Zhang, Tong
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - The pre-trained vision-language model, exemplified by CLIP [1], advances zero-shot semantic segmentation by aligning visual features with class embeddings through a transformer decoder to generate semantic masks. Despite its effectiveness, prevailing methods within this paradigm encounter challenges, including overfitting on seen classes and small fragmentation in segmentation masks. To mitigate these issues, we propose a Language-Driven Visual Consensus (LDVC) approach, fostering improved alignment of linguistic and visual information. Specifically, we leverage class embeddings as anchors due to their discrete and abstract nature, steering visual features toward class embeddings. Moreover, to achieve a more compact visual space, we introduce route attention into the transformer decoder to find visual consensus, thereby enhancing semantic consistency within the same object. Equipped with a vision-language prompting strategy, our approach significantly boosts the generalization capacity of segmentation models for unseen classes. Experimental results underscore the effectiveness of our approach, showcasing mIoU gains of 4.5% on PASCAL VOC 2012 and 3.6% on COCO-Stuff 164K for unseen classes compared with state-of-the-art methods.
AB - The pre-trained vision-language model, exemplified by CLIP [1], advances zero-shot semantic segmentation by aligning visual features with class embeddings through a transformer decoder to generate semantic masks. Despite its effectiveness, prevailing methods within this paradigm encounter challenges, including overfitting on seen classes and small fragmentation in segmentation masks. To mitigate these issues, we propose a Language-Driven Visual Consensus (LDVC) approach, fostering improved alignment of linguistic and visual information. Specifically, we leverage class embeddings as anchors due to their discrete and abstract nature, steering visual features toward class embeddings. Moreover, to achieve a more compact visual space, we introduce route attention into the transformer decoder to find visual consensus, thereby enhancing semantic consistency within the same object. Equipped with a vision-language prompting strategy, our approach significantly boosts the generalization capacity of segmentation models for unseen classes. Experimental results underscore the effectiveness of our approach, showcasing mIoU gains of 4.5% on PASCAL VOC 2012 and 3.6% on COCO-Stuff 164K for unseen classes compared with state-of-the-art methods.
KW - semantic segmentation
KW - vision-language model
KW - vision-language prompt tuning
KW - zero-shot
UR - http://www.scopus.com/inward/record.url?scp=85210008446&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2024.3504816
DO - 10.1109/TCSVT.2024.3504816
M3 - Article
AN - SCOPUS:85210008446
SN - 1051-8215
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
ER -