Gemstones: A Model Suite for Multi-Faceted Scaling Laws
Abstract
Scaling laws are typically fit using a family of models with a narrow range of frozen hyper-parameter choices. In this work we study scaling laws using a wide range of architecture and hyper-parameter choices, and highlight their impact on resulting prescriptions. As a primary artifact of our research, we release the Gemstones: the most comprehensive open-source scaling law dataset to date, consisting of over 4000 checkpoints from transformers with up to 2 billion parameters; these models have been trained with different learning rates, cooldown schedules, and architectural shapes. Our checkpoints enable more complex studies of scaling, such as a law that predicts language modeling performance as a function of model width and depth. By examining the various facets of our model suite, we find that the prescriptions of scaling laws can be highly sensitive to the experimental design process and the specific model checkpoints used during fitting. Code: https://github.com/mcleish7/gemstone-scaling-laws
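To make the width-and-depth law mentioned in the abstract concrete, here is a minimal sketch of fitting such a law to checkpoint data. The functional form (a simple sum of power laws in width and depth plus an irreducible term), the helper `loss_law`, and all numbers below are illustrative assumptions, not the paper's fitted law; the actual fitting code lives in the linked repository.

```python
# Hypothetical sketch: fit loss(width, depth) = A*width^-alpha + B*depth^-beta + E.
# Form and data are assumptions for illustration, not the Gemstones law itself.
import numpy as np
from scipy.optimize import curve_fit

def loss_law(X, A, B, E, alpha, beta):
    """Illustrative power law: loss as a function of model width and depth."""
    width, depth = X
    return A / width**alpha + B / depth**beta + E

# Toy final-checkpoint data (width, depth, validation loss) -- made up.
widths = np.array([512, 768, 1024, 1536, 2048, 3072])
depths = np.array([8, 12, 16, 20, 24, 32])
losses = np.array([3.9, 3.5, 3.2, 3.0, 2.9, 2.7])

params, _ = curve_fit(
    loss_law, (widths, depths), losses,
    p0=(100.0, 10.0, 2.0, 0.5, 0.5), maxfev=20000,
)
print(dict(zip(["A", "B", "E", "alpha", "beta"], params)))

# Extrapolate the fitted law to an unseen architectural shape.
print("predicted loss:", loss_law((np.array([4096]), np.array([40])), *params))
```

With a fit like this in hand, one can compare the predicted loss of wide-shallow versus narrow-deep shapes at a fixed parameter budget, which is the kind of prescription the abstract says is sensitive to the experimental design behind the fit.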
Community
The following similar papers were recommended by the Semantic Scholar API:
- Scaling Inference-Efficient Language Models (2025)
- Scaling Laws for Differentially Private Language Models (2025)
- AlphaZero Neural Scaling and Zipf's Law: a Tale of Board Games and Power Laws (2024)
- Scaling Laws for Upcycling Mixture-of-Experts Language Models (2025)
- The Journey Matters: Average Parameter Count over Pre-training Unifies Sparse and Dense Scaling Laws (2025)
- Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient (2025)
- ScaMo: Exploring the Scaling Law in Autoregressive Motion Generation Model (2024)