Slamming: Training a Speech Language Model on One GPU in a Day
Abstract
We introduce Slam, a recipe for training high-quality Speech Language Models (SLMs) on a single academic GPU in 24 hours. We do so through empirical analysis of model initialisation and architecture, synthetic training data, preference optimisation with synthetic data, and careful tuning of all other components. We empirically demonstrate that this training recipe also scales well with more compute, achieving results on par with leading SLMs at a fraction of the compute cost. We hope these insights will make SLM training and research more accessible. In the context of SLM scaling laws, our results far outperform the predicted compute-optimal performance, giving an optimistic view of SLM feasibility. Code, data, models, and samples are available at https://pages.cs.huji.ac.il/adiyoss-lab/slamming.
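The core step of the recipe, initialising a decoder-only model from a pretrained text LM and continuing training with next-token prediction over discrete speech units, can be sketched as below. This is an illustrative sketch only, not the SlamKit implementation: the base checkpoint (Qwen/Qwen2.5-0.5B), the speech-unit vocabulary size, and the hyperparameters are assumptions for demonstration.

```python
# Illustrative sketch (assumptions marked), not the official SlamKit code:
# initialise a decoder-only speech LM from a pretrained text LM and train it
# with next-token prediction over discrete speech units.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM

NUM_SPEECH_UNITS = 500  # assumed size of the speech-unit vocabulary (e.g. HuBERT clusters)

# Text-LM initialisation; the checkpoint name is an assumption for this sketch.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
model.resize_token_embeddings(NUM_SPEECH_UNITS)  # replace the text vocabulary with speech units

# Random unit sequences standing in for real tokenised speech data.
units = torch.randint(0, NUM_SPEECH_UNITS, (64, 256))
loader = DataLoader(TensorDataset(units), batch_size=8, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for (batch,) in loader:
    out = model(input_ids=batch, labels=batch)  # causal LM loss: predict the next speech unit
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The full recipe additionally uses synthetic training data and preference optimisation with synthetic data on top of this base training loop; those stages are omitted here for brevity.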
Community
Project page - https://pages.cs.huji.ac.il/adiyoss-lab/slamming/
Paper - https://arxiv.org/abs/2502.15814
Code - https://github.com/slp-rl/slamkit
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- SLIDE: Integrating Speech Language Model with LLM for Spontaneous Spoken Dialogue Generation (2025)
- Scaling Laws for Upcycling Mixture-of-Experts Language Models (2025)
- Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis (2025)
- ESPnet-SpeechLM: An Open Speech Language Model Toolkit (2025)
- 2 OLMo 2 Furious (2024)
- Scalable Vision Language Model Training via High Quality Data Curation (2025)
- OWLS: Scaling Laws for Multilingual Speech Recognition and Translation Models (2025)