Stepsize anything: A unified learning rate schedule for budgeted-iteration training
Abstract
A unified budget-aware learning rate schedule is proposed to optimize training within limited iteration budgets, outperforming traditional schedules across various tasks and network architectures.
The expanding computational costs and limited resources underscore the critical need for budgeted-iteration training, which aims to achieve optimal learning within predetermined iteration budgets. While learning rate schedules fundamentally govern the performance of different networks and tasks, particularly in budgeted-iteration scenarios, their design remains largely heuristic and lacks theoretical foundations. In addition, finding the optimal learning rate schedule requires extensive trial-and-error selection, making the training process inefficient. In this work, we propose the Unified Budget-Aware (UBA) schedule, a theoretically grounded learning rate schedule that consistently outperforms commonly used schedules across diverse architectures and tasks under different constrained training budgets. First, we bridge the gap by constructing a novel training-budget-aware optimization framework that explicitly accounts for robustness to variations in landscape curvature. From this framework, we derive the UBA schedule, controlled by a single hyper-parameter φ that trades off flexibility against simplicity, eliminating the need for per-network numerical optimization. Moreover, we establish a theoretical connection between φ and the condition number, which adds interpretation and justification to our approach, and we prove convergence for different values of φ. We also offer practical guidelines for selecting φ based on theoretical analysis and empirical results. Extensive experiments show that UBA consistently surpasses commonly used schedules across diverse vision and language tasks, spanning network architectures (e.g., ResNet, OLMo) and scales, under different training-iteration budgets.
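The abstract does not give the closed form of the UBA schedule, so the sketch below is only a rough illustration of what a budget-aware, single-parameter schedule family can look like in practice: a simple polynomial-decay stand-in over a fixed iteration budget, where a shape parameter phi plays the role of the single hyper-parameter. The function name budget_aware_lr and the polynomial form are assumptions for illustration, not the UBA formula from the paper.

```python
# Hypothetical sketch of a budget-aware learning rate schedule family.
# NOTE: this is NOT the paper's UBA formula; it is an illustrative
# polynomial-decay stand-in in which a single shape parameter `phi`
# controls how the learning rate decays over a fixed iteration budget.

def budget_aware_lr(step: int, total_steps: int, base_lr: float, phi: float) -> float:
    """Return the learning rate at `step` given a total budget of `total_steps`.

    phi > 1 decays faster early in training; phi < 1 stays high for longer
    and drops sharply near the end of the budget; phi = 1 is linear decay.
    """
    progress = min(step, total_steps) / total_steps      # fraction of budget used, in [0, 1]
    return base_lr * (1.0 - progress) ** phi             # anneals to 0 exactly at the budget


if __name__ == "__main__":
    total_steps, base_lr = 10_000, 3e-4
    for phi in (0.5, 1.0, 2.0):
        lrs = [budget_aware_lr(t, total_steps, base_lr, phi)
               for t in (0, total_steps // 2, total_steps)]
        print(f"phi={phi}: lr at 0%, 50%, 100% of budget -> {lrs}")
```

The point of the sketch is the interface rather than the specific curve: the schedule is defined relative to the total iteration budget, and a single parameter selects a member of the family, which is the kind of trade-off between flexibility and simplicity the abstract describes.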
Community
The increasing computational costs highlight the need for budgeted-iteration training. Current learning rate schedules are mostly heuristic and inefficient. This work introduces the Unified Budget-Aware (UBA) schedule, a theoretically grounded approach that outperforms traditional schedules across various architectures and tasks. The UBA schedule is controlled by a single hyper-parameter, φ, which balances flexibility and simplicity, and is backed by theoretical analysis and empirical results showing its effectiveness under constrained training budgets.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Taming LLMs by Scaling Learning Rates with Gradient Grouping (2025)
- Tuning Learning Rates with the Cumulative-Learning Constant (2025)
- Budget-Adaptive Adapter Tuning in Orthogonal Subspaces for Continual Learning in LLMs (2025)
- FedHL: Federated Learning for Heterogeneous Low-Rank Adaptation via Unbiased Aggregation (2025)
- Optimization-Inspired Few-Shot Adaptation for Large Language Models (2025)
- Continuous Subspace Optimization for Continual Learning (2025)
- GRAPE: Optimize Data Mixture for Group Robust Multi-target Adaptive Pretraining (2025)