Learning irregular space transformation for person re-identification

Yanwei Zheng, Hao Sheng, Yang Liu, Kai Lv, Wei Ke, Zhang Xiong

Research output: Article › peer-review

1 Citation (Scopus)

Abstract

Person re-identification (ReID) classifies people by their discriminative features. Human perception usually relies on the minority of discriminative colors to distinguish targets, rather than on the majority of colors shared among them. ReID uses a small number of fixed cameras, which produce a small set of similar backgrounds, so the majority of background pixels become non-discriminative (an effect that is amplified in the feature maps). This paper analyzes the distributions of feature maps to reveal their differing discriminative power. Using batch statistics, it classifies feature-map values into individual ones and general ones according to their deviation from the mean of each mini-batch. Building on these findings, it introduces a learning irregular space transformation model for convolutional neural networks that enlarges the individual variance while reducing the general one, thereby enhancing the discrimination of features. We validate the approach on various public data sets and achieve competitive results in quantitative evaluation.
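
The abstract describes classifying feature-map values into individual and general ones by their deviation from the mini-batch mean, then enlarging the individual variance while shrinking the general one. Below is a minimal PyTorch sketch of that idea only; it is not the authors' published model, and the class name IrregularSpaceTransform and the hyper-parameters tau, alpha, and beta are illustrative assumptions.

```python
import torch
import torch.nn as nn


class IrregularSpaceTransform(nn.Module):
    """Rescale activations around the per-location mini-batch mean:
    values that deviate strongly ("individual") are pushed further out,
    values close to the mean ("general") are pulled in."""

    def __init__(self, tau=1.0, alpha=1.5, beta=0.5, eps=1e-5):
        super().__init__()
        self.tau = tau      # deviation threshold, in units of batch std (assumed)
        self.alpha = alpha  # gain for individual (discriminative) values (assumed)
        self.beta = beta    # gain for general (non-discriminative) values (assumed)
        self.eps = eps

    def forward(self, x):
        # x: (N, C, H, W) mini-batch of feature maps
        mean = x.mean(dim=0, keepdim=True)            # batch mean per location
        std = x.std(dim=0, keepdim=True) + self.eps   # batch std per location
        deviation = (x - mean).abs() / std            # normalized deviation
        individual = (deviation > self.tau).float()   # 1 = individual, 0 = general
        gain = self.beta + (self.alpha - self.beta) * individual
        # enlarge the variance of individual values, shrink that of general ones
        return mean + gain * (x - mean)


if __name__ == "__main__":
    feats = torch.randn(32, 256, 24, 8)        # e.g. features from a CNN backbone
    out = IrregularSpaceTransform()(feats)
    print(out.shape)                           # torch.Size([32, 256, 24, 8])
```

In this reading, the module would be inserted after a convolutional block during training, so that subsequent layers see features whose discriminative (individual) components are spread apart while the shared (general) components are compressed.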

Original language: English
Article number: 8468189
Pages (from-to): 53214-53225
Number of pages: 12
Journal: IEEE Access
Volume: 6
DOIs
Publication status: Published - 2018
