Scaling Analysis of Interleaved Speech-Text Language Models
Abstract
Existing scaling analyses of Speech Language Models (SLMs) paint a bleak picture: they predict that SLMs require far more compute and data than text models, leading some to question the feasibility of training high-quality SLMs. However, modern SLMs are often initialised from pre-trained TextLMs using speech-text interleaving to enable knowledge transfer. This raises the question: do interleaved SLMs scale more efficiently than textless SLMs? In this paper we answer with a resounding yes! We conduct a scaling analysis of interleaved SLMs by training several dozen models and analysing the scaling trends. Under this setup, SLMs scale more efficiently with compute. Additionally, our results indicate that the scaling dynamics differ significantly from those of textless SLMs, suggesting that one should allocate notably more of the compute budget to increasing model size rather than training tokens. We also study the role of synthetic data and TextLM model families in unlocking this potential. Our scaled-up model achieves performance comparable to leading models on speech semantic metrics while using less compute and data than other approaches. We open-source models, samples, and data: https://pages.cs.huji.ac.il/adiyoss-lab/sims.
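To illustrate the kind of analysis the abstract describes, here is a minimal sketch of fitting a Chinchilla-style loss surface L(N, D) = E + A/N^α + B/D^β to a set of training runs and deriving the compute-optimal split between model size N and training tokens D. This is not the paper's actual fitting code; the run data and parameter values below are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(X, E, A, B, alpha, beta):
    """Chinchilla-style loss surface: L(N, D) = E + A/N^alpha + B/D^beta."""
    n, d = X
    return E + A / n**alpha + B / d**beta

# Synthetic training runs (params N, tokens D) -- stand-ins for real SLM runs.
rng = np.random.default_rng(0)
N = 10 ** rng.uniform(7, 10, 40)    # 10M .. 10B parameters
D = 10 ** rng.uniform(8, 11, 40)    # 100M .. 100B tokens
true = dict(E=2.0, A=3e2, B=4e2, alpha=0.30, beta=0.28)
loss = scaling_law((N, D), **true) + rng.normal(0, 0.01, 40)

# Fit the five scaling parameters to the observed losses.
popt, _ = curve_fit(scaling_law, (N, D), loss,
                    p0=[1.0, 100.0, 100.0, 0.2, 0.2], maxfev=50000)
E, A, B, alpha, beta = popt

# Under C ~= 6*N*D, minimising L at fixed compute gives N* proportional to
# C**(beta/(alpha+beta)): a larger exponent means spend more compute on
# model size, fewer tokens -- the trade-off the scaling analysis measures.
size_exponent = beta / (alpha + beta)
```

A finding that interleaved SLMs favour model size over tokens would show up here as a larger `size_exponent` than the one fitted for textless SLMs.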
Community
models and eval dataset - https://huggingface.co/collections/slprl/sims-67ecea7521ff9740ff456c5e
Samples - https://pages.cs.huji.ac.il/adiyoss-lab/sims/
Code - https://github.com/slp-rl/slamkit
Similar papers recommended by the Semantic Scholar API (via Librarian Bot):
- Slamming: Training a Speech Language Model on One GPU in a Day (2025)
- Text-Speech Language Models with Improved Cross-Modal Transfer by Aligning Abstraction Levels (2025)
- Balancing Speech Understanding and Generation Using Continual Pre-training for Codec-based Speech LLM (2025)
- From TOWER to SPIRE: Adding the Speech Modality to a Text-Only LLM (2025)
- Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis (2025)
- When Large Language Models Meet Speech: A Survey on Integration Approaches (2025)
- InSerter: Speech Instruction Following with Unsupervised Interleaved Pre-training (2025)