arxiv:2505.24452

Stepsize anything: A unified learning rate schedule for budgeted-iteration training

Published on May 30 · Submitted by Taoer on Jun 3
Abstract

A unified budget-aware learning rate schedule is proposed to optimize training within limited iteration budgets, outperforming traditional schedules across various tasks and network architectures.

AI-generated summary

The expanding computational costs and limited resources underscore the critical need for budgeted-iteration training, which aims to achieve optimal learning within predetermined iteration budgets. While learning rate schedules fundamentally govern the performance of different networks and tasks, particularly in budgeted-iteration scenarios, their design remains largely heuristic and lacks theoretical foundations. In addition, finding the optimal learning rate schedule requires extensive trial-and-error selection, making the training process inefficient. In this work, we propose the Unified Budget-Aware (UBA) schedule, a theoretically grounded learning rate schedule that consistently outperforms commonly used schedules across diverse architectures and tasks under different constrained training budgets. First, we bridge the gap by constructing a novel training-budget-aware optimization framework that explicitly accounts for robustness to landscape curvature variations. From this framework, we derive the UBA schedule, controlled by a single hyper-parameter φ that provides a trade-off between flexibility and simplicity, eliminating the need for per-network numerical optimization. Moreover, we establish a theoretical connection between φ and the condition number, adding interpretation and justification to our approach, and we prove convergence for different values of φ. We offer practical guidelines for selecting φ via theoretical analysis and empirical results. Extensive experimental results show that UBA consistently surpasses commonly used schedules across diverse vision and language tasks, spanning network architectures (e.g., ResNet, OLMo) and scales, under different training-iteration budgets.

Community

Paper author · Paper submitter

The increasing computational costs highlight the need for budgeted-iteration training, yet current learning rate schedules are mostly heuristic and inefficient to tune. This work introduces the Unified Budget-Aware (UBA) schedule, a theoretically grounded approach that outperforms traditional schedules across diverse architectures and tasks. The UBA schedule is controlled by a single hyper-parameter, φ, which balances flexibility and simplicity, and is backed by theoretical analysis and empirical results demonstrating its effectiveness under constrained training budgets.
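
For intuition, here is a minimal sketch of what a single-hyperparameter, budget-aware schedule family can look like in practice. The functional form below (a rational decay over a fixed budget T, controlled by one knob phi) is an illustrative stand-in, not the UBA formula from the paper; the names `uba_like_schedule`, `eta_max`, and the settings `phi=2.0` and `T=10_000` are all hypothetical choices for the sketch.

```python
import torch

def uba_like_schedule(t, T, eta_max, phi=1.0, eta_min=0.0):
    # Hypothetical single-knob, budget-aware schedule (NOT the paper's UBA
    # formula): a rational decay over the fixed iteration budget T.
    # phi = 0 recovers linear decay; larger phi front-loads the decay.
    s = t / T                              # fraction of the budget consumed
    decay = (1.0 - s) / (1.0 + phi * s)    # goes from 1 at t=0 to 0 at t=T
    return eta_min + (eta_max - eta_min) * decay

model = torch.nn.Linear(10, 2)
# Base lr of 1.0 so the lambda below returns the absolute learning rate.
opt = torch.optim.SGD(model.parameters(), lr=1.0)
T = 10_000  # the predetermined iteration budget
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lr_lambda=lambda t: uba_like_schedule(t, T, eta_max=0.1, phi=2.0))

for step in range(T):
    opt.step()      # (loss.backward() would precede this in real training)
    sched.step()    # advance the schedule by one iteration of the budget
```

The key property this sketch shares with the paper's proposal is that the whole shape of the decay is pinned to the iteration budget T and tuned through a single scalar, rather than requiring per-network search over schedule families.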


Models citing this paper: 0 · Datasets citing this paper: 0 · Spaces citing this paper: 0 · Collections including this paper: 1

To link a model, dataset, or Space to this paper, cite arxiv.org/abs/2505.24452 in its README.md.