
FastFace: Fast-Converging Scheduler for Large-Scale Face Recognition Training With One GPU

  • Xueyuan Gong
  • Zhiquan Liu
  • Yain Whar Si
  • Xiaochen Yuan
  • Ke Wang
  • Xiaoxiang Liu
  • Cong Lin
  • Xinyuan Zhang
  • Jinan University
  • University of Macau

Research output: Article › peer-review

2 Citations (Scopus)

Abstract

Computing power has become a foundational and indispensable resource in deep learning, particularly in tasks such as Face Recognition (FR) model training on large-scale datasets, where multiple GPUs are often a necessity. Recognizing this challenge, some FR methods have started exploring ways to compress the fully connected layer in FR models. Unlike other approaches, our observations reveal that without prompt scheduling of the learning rate (LR) during FR model training, the loss curve tends to exhibit numerous stationary subsequences. To address this issue, we introduce a novel LR scheduler leveraging an Exponential Moving Average (EMA) and a Haar Convolutional Kernel (HCK) to eliminate stationary subsequences, significantly reducing convergence time. However, the proposed scheduler incurs considerable computational overhead due to its time complexity. To overcome this limitation, we propose FastFace, a fast-converging scheduler with negligible time complexity, i.e., O(1) per iteration, during training. In practice, FastFace accelerates FR model training to a quarter of its original time while sacrificing no more than 1% accuracy, making large-scale FR training feasible on a single GPU in terms of both time and space complexity. Extensive experiments validate the efficiency and effectiveness of FastFace.
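The abstract describes the core idea only at a high level: smooth the loss with an EMA, detect when the smoothed curve becomes stationary, and cut the LR to escape the plateau, all at O(1) cost per iteration. A minimal sketch of that idea follows; the class name `EmaPlateauScheduler` and the parameters (`decay`, `patience`, `factor`, `tol`) are illustrative assumptions, not the paper's actual algorithm or API, and the HCK-based detection is replaced here by a simple best-so-far comparison.

```python
# Hypothetical sketch of the mechanism described in the abstract: an EMA of
# the training loss flags stationary subsequences, which trigger an LR cut.
# All names and hyperparameters are illustrative, not from the paper.

class EmaPlateauScheduler:
    def __init__(self, lr, decay=0.98, patience=50, factor=0.1, tol=1e-3):
        self.lr = lr
        self.decay = decay        # EMA smoothing factor
        self.patience = patience  # iterations tolerated without improvement
        self.factor = factor      # multiplicative LR cut on a plateau
        self.tol = tol            # minimum relative improvement of the EMA
        self.ema = None
        self.best = float("inf")
        self.stale = 0

    def step(self, loss):
        # O(1) work per iteration: one EMA update plus a comparison.
        if self.ema is None:
            self.ema = loss
        else:
            self.ema = self.decay * self.ema + (1.0 - self.decay) * loss
        if self.ema < self.best * (1.0 - self.tol):
            self.best = self.ema
            self.stale = 0
        else:
            self.stale += 1
            if self.stale >= self.patience:
                self.lr *= self.factor  # escape the stationary subsequence
                self.stale = 0
        return self.lr
```

Calling `step(loss)` once per training iteration keeps the LR unchanged while the smoothed loss keeps improving and cuts it by `factor` after `patience` stagnant iterations, which is the plateau-elimination behavior the abstract attributes to the proposed scheduler.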

Original language: English
Pages (from-to): 11271-11281
Number of pages: 11
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 35
Issue number: 11
DOIs
Publication status: Published - 2025

Fingerprint

Dive into the research topics of "FastFace: Fast-Converging Scheduler for Large-Scale Face Recognition Training With One GPU". Together they form a unique fingerprint.
