Pose Attention-Guided Paired-Images Generation for Visible-Infrared Person Re-Identification

Yongheng Qian, Su Kit Tang

Research output: Article › peer-reviewed

6 Citations (Scopus)

Abstract

A key challenge of visible-infrared person re-identification (VI-ReID) comes from the modality difference between visible and infrared images, which further causes large intra-person and small inter-person distances. Most existing methods design feature extractors and loss functions to bridge the modality gap. However, the lack of paired images constrains the VI-ReID model's ability to learn instance-level alignment features. Different from these methods, in this paper we propose a pose attention-guided paired-images generation network (PAPG) from the standpoint of data augmentation. PAPG can generate cross-modality paired images that are consistent in shape and appearance with the real images, enabling instance-level feature alignment by minimizing the distance between every pair of images. Furthermore, our method alleviates data insufficiency and reduces the risk of VI-ReID model overfitting. Comprehensive experiments conducted on two publicly available datasets validate the effectiveness and generalizability of PAPG. In particular, on the SYSU-MM01 dataset, our method achieves gains of 7.76% in Rank-1 accuracy and 5.87% in mAP.
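The instance-level alignment described above (minimizing the distance between each real image and its generated cross-modality counterpart) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of a squared-L2 distance, and the toy feature dimensions are all assumptions.

```python
import numpy as np

def pairwise_alignment_loss(feat_real: np.ndarray, feat_gen: np.ndarray) -> float:
    """Mean squared L2 distance between each real-image feature and the
    feature of its generated cross-modality pair (illustrative loss only;
    PAPG's actual objective may differ)."""
    diff = feat_real - feat_gen
    return float(np.mean(np.sum(diff * diff, axis=1)))

# Toy example: 4 identities, 8-dimensional features.
rng = np.random.default_rng(0)
f_vis = rng.normal(size=(4, 8))                      # real visible-image features
f_ir_pair = f_vis + 0.1 * rng.normal(size=(4, 8))    # generated infrared-pair features
loss = pairwise_alignment_loss(f_vis, f_ir_pair)
```

Because the generated pair shares the identity (shape and appearance) of the real image, driving this loss toward zero pulls the two modality-specific embeddings of the same person together.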

Original language: English
Pages (from-to): 346-350
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 31
DOIs
Publication status: Published - 2024
