TY - JOUR
T1 - Quality adaptive class center for lightweight large-scale face recognition
AU - Zheng, Zhuowen
AU - Liu, Zhiquan
AU - Si, Yain Whar
AU - Yuan, Xiaochen
AU - Duan, Junwei
AU - Li, Xiaofan
AU - Zhang, Xinyuan
AU - Gong, Xueyuan
N1 - Publisher Copyright:
© 2025 Elsevier Inc.
PY - 2026/4/15
Y1 - 2026/4/15
AB - In recent years, advances in deep neural networks and the availability of large-scale datasets have significantly improved the performance of face recognition (FR) models. However, since the number of class centers in the fully-connected (FC) layer is directly tied to the number of identities in the dataset, training an FR model on a large-scale dataset often results in a substantial number of model parameters. Previous methods have attempted to reduce the parameter count by generating class centers from images, but they often overlook the influence of low-quality images in large-scale datasets, which can degrade the generated class centers. This paper proposes the attention fully-connected (AttFC) layer, which significantly reduces the number of parameters needed to train an FR model on large-scale datasets. AttFC incorporates an attention loader that weights images according to their quality when generating class centers. Comprehensive experiments demonstrate that AttFC achieves performance comparable to state-of-the-art (SOTA) methods while significantly decreasing the number of model parameters. In particular, when using the same number of class centers, AttFC improves average accuracy by over 1% compared with most other methods for large-scale FR. Furthermore, training FR models on large-scale datasets such as WebFace21M can lead to out-of-memory issues, which AttFC helps mitigate.
KW - Face recognition
KW - Image quality
KW - Large-scale face datasets
UR - https://www.scopus.com/pages/publications/105024318936
U2 - 10.1016/j.ins.2025.122944
DO - 10.1016/j.ins.2025.122944
M3 - Article
AN - SCOPUS:105024318936
SN - 0020-0255
VL - 732
JO - Information Sciences
JF - Information Sciences
M1 - 122944
ER -