To raise or not to raise: the autonomous learning rate question

Xiaomeng Dong, Tao Tan, Michael Potter, Yun Chan Tsai, Gaurav Kumar, V. Ratna Saripalli, Theodore Trafalis

Research output: Contribution to journal › Article › peer-review

Abstract

There is a parameter ubiquitous throughout the deep learning world: the learning rate. There is likewise a ubiquitous question: what should that learning rate be? The true answer to this question is often tedious and time-consuming to obtain, and a great deal of arcane knowledge has accumulated in recent years about how to pick and modify learning rates to achieve optimal training performance. Moreover, the long hours spent carefully crafting the perfect learning rate can come to nothing the moment your network architecture, optimizer, dataset, or initial conditions change ever so slightly. But it need not be this way. We propose a new answer to the great learning rate question: the Autonomous Learning Rate Controller. Find it at https://github.com/fastestimator/ARC/tree/v2.0.
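The abstract does not describe how the proposed controller decides when to raise or lower the learning rate; for readers unfamiliar with the general idea, the sketch below shows a deliberately simple, hypothetical controller that adjusts the learning rate from the recent training-loss trend. It is not the ARC method from the paper: the class name `SimpleLRController`, the loss window, and the thresholds are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of an "autonomous" learning-rate controller.
# This is NOT the ARC implementation from the paper; it only illustrates the
# general idea of raising or lowering the learning rate from observed loss trends.

from collections import deque


class SimpleLRController:
    """Adjusts a learning rate based on the recent training-loss trend."""

    def __init__(self, lr=1e-3, window=5, raise_factor=1.5, lower_factor=0.5):
        self.lr = lr
        self.window = window
        self.raise_factor = raise_factor
        self.lower_factor = lower_factor
        self.losses = deque(maxlen=window)

    def update(self, loss):
        """Record the latest loss and return the (possibly adjusted) learning rate."""
        self.losses.append(loss)
        if len(self.losses) < self.window:
            return self.lr  # not enough history yet to make a decision

        first, last = self.losses[0], self.losses[-1]
        relative_change = (first - last) / max(abs(first), 1e-12)

        if relative_change > 0.05:
            # Loss is dropping steadily across the window: try a larger step.
            self.lr *= self.raise_factor
        elif relative_change < 0.0:
            # Loss is rising across the window: back off.
            self.lr *= self.lower_factor
        # Otherwise the loss is roughly flat: hold the current learning rate.
        return self.lr


if __name__ == "__main__":
    # Drive the controller with a synthetic loss curve to show raise/hold decisions.
    controller = SimpleLRController(lr=1e-3)
    synthetic_losses = [1.0, 0.9, 0.8, 0.75, 0.7, 0.72, 0.74, 0.73, 0.6, 0.5]
    for step, loss in enumerate(synthetic_losses):
        lr = controller.update(loss)
        print(f"step {step}: loss={loss:.2f} lr={lr:.2e}")
```

In practice, a controller of this kind would be wired into the training loop as a per-step or per-epoch callback that feeds the optimizer a new learning rate; the paper's ARC instead learns this decision policy rather than relying on hand-set thresholds like those above.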

Original language: English
Pages (from-to): 1679-1698
Number of pages: 20
Journal: Annals of Mathematics and Artificial Intelligence
Volume: 92
Issue number: 6
DOIs
Publication status: Published - Dec 2024
Externally published: Yes

Keywords

  • AutoML
  • Deep learning
  • Learning rate
  • Optimization
