Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources
Abstract
Large Language Models (LLMs) exhibit significant performance disparities across languages, primarily benefiting high-resource languages while marginalizing underrepresented ones. Continual Pretraining (CPT) has emerged as a promising approach to address this imbalance, yet the relative effectiveness of monolingual, bilingual, and code-augmented data strategies remains unclear. This study systematically evaluates 36 CPT configurations involving three multilingual base models and 30+ languages categorized as altruistic, selfish, and stagnant, spanning various resource levels. Our findings reveal three major insights: (1) Bilingual CPT improves multilingual classification but often causes language-mixing issues during generation. (2) Including programming code data during CPT consistently enhances multilingual classification accuracy, particularly benefiting low-resource languages, but introduces a trade-off by slightly degrading generation quality. (3) Contrary to prior work, languages often deviate substantially from their assigned classifications in their impact on cross-lingual transfer: languages classified as altruistic often negatively affect related languages, selfish languages show conditional, configuration-dependent behavior, and stagnant languages demonstrate surprising adaptability under certain CPT conditions. These nuanced interactions underscore the complexity of multilingual representation learning and the need for systematic studies on generalizable language classification to inform future multilingual CPT strategies.
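For readers unfamiliar with the three data-mixing strategies the abstract compares, the sketch below illustrates how such CPT mixtures might be assembled. It is a minimal illustration under assumed details, not the authors' recipe: the dataset names (`web_text.*`, `source_code.github`), the English pivot language, the mixing ratios, and the weighted-sampling scheme are all hypothetical.

```python
# Minimal sketch of three CPT data-mixing strategies: monolingual,
# bilingual, and code-augmented. All source names, ratios, and the
# sampling scheme are illustrative assumptions.
import random

def build_cpt_mixture(strategy, target_lang="sw", pivot_lang="en",
                      code_fraction=0.25):
    """Return a list of (source, weight) pairs for one CPT configuration."""
    if strategy == "monolingual":
        # Train only on text in the target language.
        return [(f"web_text.{target_lang}", 1.0)]
    if strategy == "bilingual":
        # Pair the target language with a high-resource pivot language.
        return [(f"web_text.{target_lang}", 0.5),
                (f"web_text.{pivot_lang}", 0.5)]
    if strategy == "code_augmented":
        # Give part of the token budget to programming-code data.
        text_share = 1.0 - code_fraction
        return [(f"web_text.{target_lang}", text_share / 2),
                (f"web_text.{pivot_lang}", text_share / 2),
                ("source_code.github", code_fraction)]
    raise ValueError(f"unknown strategy: {strategy}")

def sample_source(mixture, rng):
    """Draw one training document's source according to the mixture weights."""
    sources, weights = zip(*mixture)
    return rng.choices(sources, weights=weights, k=1)[0]

rng = random.Random(0)
for strategy in ("monolingual", "bilingual", "code_augmented"):
    mix = build_cpt_mixture(strategy)
    print(strategy, mix, "->", sample_source(mix, rng))
```

In a real CPT run the sampled sources would feed a data loader over tokenized shards; the weights here stand in for the per-language and per-modality token budgets that distinguish the 36 configurations.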
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper. The following papers were recommended by the Semantic Scholar API:
- Investigating and Scaling up Code-Switching for Multilingual Language Model Pre-Training (2025)
- Multilingual Language Model Pretraining using Machine-translated Data (2025)
- Is LLM the Silver Bullet to Low-Resource Languages Machine Translation? (2025)
- Scaling Test-time Compute for Low-resource Languages: Multilingual Reasoning in LLMs (2025)
- Enhancing Small Language Models for Cross-Lingual Generalized Zero-Shot Classification with Soft Prompt Tuning (2025)
- LayAlign: Enhancing Multilingual Reasoning in Large Language Models via Layer-Wise Adaptive Fusion and Alignment Strategy (2025)
- MMLU-ProX: A Multilingual Benchmark for Advanced Large Language Model Evaluation (2025)