TY - JOUR
T1 - Local perspective based synthesis for vehicle re-identification
T2 - A transformation state adversarial method
AU - Chen, Yanbing
AU - Ke, Wei
AU - Lin, Hong
AU - Lam, Chan Tong
AU - Lv, Kai
AU - Sheng, Hao
AU - Xiong, Zhang
N1 - Publisher Copyright:
© 2022 Elsevier Inc.
PY - 2022/2
Y1 - 2022/2
N2 - Vehicle re-identification (V-ReID) aims to retrieve images of a specific vehicle from a set of images typically captured by different cameras. Vehicles are among the most important objects in cross-camera target recognition systems, and re-identifying them is one of the most difficult tasks because rigid vehicle bodies exhibit only subtle differences in visible appearance. Compared with other methods for improving re-identification accuracy, data augmentation is a more straightforward and effective technique. In this paper, we propose a novel data synthesis method for V-ReID based on local-region perspective transformation, transformation state adversarial learning and a candidate pool. Specifically, we first propose a parameter generator network, a lightweight convolutional neural network that generates the transformation states. Second, we design an adversarial module that adds as much noise information as possible while keeping the labels and structure of the dataset intact. With this adversarial module, we improve the performance of the network and generate more suitable and harder training samples. Furthermore, we use a candidate pool to store harder samples for further selection, which improves the performance of the model. Our system pays more balanced attention to vehicle features. Extensive experiments show that our method significantly boosts the performance of V-ReID on the VeRi-776, VehicleID and VERI-Wild datasets.
AB - Vehicle re-identification (V-ReID) aims to retrieve images of a specific vehicle from a set of images typically captured by different cameras. Vehicles are among the most important objects in cross-camera target recognition systems, and re-identifying them is one of the most difficult tasks because rigid vehicle bodies exhibit only subtle differences in visible appearance. Compared with other methods for improving re-identification accuracy, data augmentation is a more straightforward and effective technique. In this paper, we propose a novel data synthesis method for V-ReID based on local-region perspective transformation, transformation state adversarial learning and a candidate pool. Specifically, we first propose a parameter generator network, a lightweight convolutional neural network that generates the transformation states. Second, we design an adversarial module that adds as much noise information as possible while keeping the labels and structure of the dataset intact. With this adversarial module, we improve the performance of the network and generate more suitable and harder training samples. Furthermore, we use a candidate pool to store harder samples for further selection, which improves the performance of the model. Our system pays more balanced attention to vehicle features. Extensive experiments show that our method significantly boosts the performance of V-ReID on the VeRi-776, VehicleID and VERI-Wild datasets.
KW - Candidate pool
KW - Data synthesis
KW - Local-region perspective transformation
KW - Parameter generator network
KW - Transformation state adversarial
KW - Vehicle re-identification
UR - http://www.scopus.com/inward/record.url?scp=85122999421&partnerID=8YFLogxK
U2 - 10.1016/j.jvcir.2021.103432
DO - 10.1016/j.jvcir.2021.103432
M3 - Article
AN - SCOPUS:85122999421
SN - 1047-3203
VL - 83
JO - Journal of Visual Communication and Image Representation
JF - Journal of Visual Communication and Image Representation
M1 - 103432
ER -