Papers
arxiv:2506.01049

Taming LLMs by Scaling Learning Rates with Gradient Grouping

Published on Jun 1
· Submitted by ZedongWangAI on Jun 3
Abstract

SGG, an optimizer wrapper, enhances adaptive learning rates for large language models by grouping gradients and applying cluster-specific scaling, improving convergence and stability.

AI-generated summary

Training large language models (LLMs) poses challenges due to their massive scale and heterogeneous architectures. While adaptive optimizers like AdamW help address gradient variations, they still struggle with efficient and effective parameter-wise learning rate estimation, resulting in training instability, slow convergence, and poor compatibility with parameter-efficient fine-tuning (PEFT) techniques. This work introduces Scaling with Gradient Grouping (SGG), an optimizer wrapper that improves adaptive learning rate estimation via dynamic grouping and group-specific scaling. SGG first groups the gradient statistics in each layer into clusters and then applies cluster-specific scaling to calibrate the learning rate of each parameter, thus imposing collective group-wise constraints while maintaining precise per-parameter adaptation. Experiments on diverse (M)LLM benchmarks show that SGG integrates seamlessly with existing optimizers and offers consistent gains and faster convergence over baselines across various model sizes. Its stability under varying batch sizes and learning rates establishes SGG as a robust choice for LLM optimization.
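To make the grouping-and-scaling idea concrete, here is a minimal, illustrative sketch rather than the authors' implementation: the clustering rule (a simple 1-D k-means over log gradient-moment magnitudes) and the calibration rule (pulling each cluster's mean learning rate toward the layer mean) are assumptions chosen for readability, and the function name is hypothetical.

```python
import torch


def sgg_style_scaling(per_param_lr, grad_moments, num_clusters=3, eps=1e-12):
    """Hypothetical sketch of group-wise LR scaling for one layer.

    per_param_lr : adaptive per-parameter learning rates
                   (e.g. lr / (sqrt(v_hat) + eps) from AdamW).
    grad_moments : per-parameter gradient statistics (e.g. second-moment
                   estimates), same shape as per_param_lr.
    Returns learning rates rescaled by a cluster-specific factor.
    """
    flat = grad_moments.flatten().abs().log1p()  # log space for stability

    # Simple 1-D k-means over gradient statistics (illustrative choice,
    # not necessarily the paper's grouping method).
    centers = torch.quantile(flat, torch.linspace(0.1, 0.9, num_clusters))
    for _ in range(10):
        assign = (flat.unsqueeze(1) - centers.unsqueeze(0)).abs().argmin(dim=1)
        for k in range(num_clusters):
            mask = assign == k
            if mask.any():
                centers[k] = flat[mask].mean()

    # Cluster-specific scaling: calibrate each cluster's mean learning rate
    # toward the layer mean, keeping per-parameter adaptation within clusters.
    lr_flat = per_param_lr.flatten()
    layer_mean = lr_flat.mean()
    scaled = lr_flat.clone()
    for k in range(num_clusters):
        mask = assign == k
        if mask.any():
            cluster_mean = lr_flat[mask].mean()
            scaled[mask] = lr_flat[mask] * (layer_mean / (cluster_mean + eps))
    return scaled.view_as(per_param_lr)
```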

Community

Paper author Paper submitter

[ACL 2025 Main] Taming LLMs by Scaling Learning Rates with Gradient Grouping

Paper author

Welcome to discuss this new optimization paradigm for LLMs/MLLMs!
SGG_intro.png

Paper author

An overview of the SGG algorithm, which can be used as a plug-and-play wrapper for popular DNN optimizers; a sketch of the wrapper interface follows below.

SGG_algo.png
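As a rough illustration of the plug-and-play usage, the skeleton below wraps an arbitrary PyTorch optimizer. The class name, constructor, and the placeholder step logic are assumptions for illustration, not the released SGG API; the real group-wise scaling would replace the comment inside `step()`.

```python
import torch


class GroupedLRWrapper:
    """Illustrative wrapper around any torch optimizer (hypothetical API)."""

    def __init__(self, base_optimizer):
        self.base = base_optimizer

    def zero_grad(self, set_to_none=True):
        self.base.zero_grad(set_to_none=set_to_none)

    @torch.no_grad()
    def step(self):
        # Placeholder: a real implementation would cluster per-layer gradient
        # statistics here and rescale the adaptive learning rates group-wise
        # (e.g. with something like sgg_style_scaling above) before updating.
        self.base.step()


# Usage with a tiny model and synthetic data.
model = torch.nn.Linear(16, 1)
opt = GroupedLRWrapper(torch.optim.AdamW(model.parameters(), lr=1e-3))
x, y = torch.randn(32, 16), torch.randn(32, 1)
for _ in range(5):
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
    opt.zero_grad()
```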
