Person re-identification (ReID) classifies people by their discriminative features. Human perception typically relies on a minority of discriminative colors to distinguish targets, rather than on the majority of colors shared among them. ReID systems use a small number of fixed cameras, which produce a limited set of similar backgrounds, so the majority of background pixels are non-discriminative (an effect that is amplified in the feature maps). This paper analyzes the distributions of feature maps to reveal their differing discriminative power. It also collects statistics that classify feature-map values into individual ones and general ones according to the deviation of their mean value over each mini-batch. Building on these findings, we introduce a learned irregular space transformation for convolutional neural networks that enlarges the individual variance while reducing the general variance, thereby enhancing the discrimination of features. We validate our analysis on several public data sets and achieve competitive results in quantitative evaluation.
- irregular space transformation
- convolutional neural networks
- discriminative power enhancement
- person re-identification
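The core idea above — splitting feature values into individual and general ones by their mini-batch mean deviation, then rescaling the two groups in opposite directions — can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction for intuition only: the function name `split_and_scale`, the threshold rule, and the `gain`/`damp` factors are illustrative assumptions, not the paper's actual model, which learns the transformation inside a convolutional network.

```python
import numpy as np

def split_and_scale(features, thresh=0.5, gain=2.0, damp=0.5):
    """Hypothetical sketch of individual/general separation.

    features: (batch, channels) array of pooled feature-map values.
    A channel whose per-sample values deviate strongly from the
    mini-batch mean is treated as 'individual' (discriminative);
    the rest are treated as 'general' (shared, e.g. background).
    Individual variance is enlarged, general variance is reduced.
    """
    batch_mean = features.mean(axis=0, keepdims=True)       # (1, C)
    # Mean absolute deviation of each channel across the mini-batch.
    deviation = np.abs(features - batch_mean).mean(axis=0)  # (C,)
    # Illustrative classification rule: compare to the average deviation.
    individual = deviation > thresh * deviation.mean()      # (C,) bool mask
    scaled = features.copy()
    # Enlarge variance of individual channels around the batch mean...
    scaled[:, individual] = (batch_mean[:, individual]
                             + gain * (features[:, individual]
                                       - batch_mean[:, individual]))
    # ...and shrink variance of general channels.
    scaled[:, ~individual] = (batch_mean[:, ~individual]
                              + damp * (features[:, ~individual]
                                        - batch_mean[:, ~individual]))
    return scaled, individual
```

On a toy mini-batch where one channel varies strongly across samples and another barely varies, the first is flagged as individual and its variance grows, while the second is flagged as general and its variance shrinks — the qualitative effect the paper's transformation aims for.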