GemmaX2 Collection
GemmaX2 language models, including pretrained and instruction-tuned models in two sizes: 2B and 9B.
GemmaX2-28-2B-Pretrain is a language model developed through continual pretraining of Gemma2-2B using a mix of 56 billion tokens from both monolingual and parallel data across 28 different languages. Please find more details in our paper: Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study.
Note that GemmaX2-28-2B-Pretrain is NOT a translation model.
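Since this is a pretrained base model rather than an instruction-tuned or translation model, it is used like any other causal language model. The sketch below shows one way to load it with Hugging Face Transformers; the repository id is an assumption, so check the model page for the exact name.

```python
# Minimal sketch: loading GemmaX2-28-2B-Pretrain as a causal LM with Transformers.
# The repo id below is assumed for illustration; verify it on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelSpace/GemmaX2-28-2B-Pretrain"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Prompt it as a plain language model (text continuation), not as a translator
# or chat model, since no instruction tuning has been applied at this stage.
inputs = tokenizer("Multilingual language models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```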
We collect monolingual data from CulturaX and MADLAD-400. For parallel data, we collect all Chinese-centric and English-centric parallel datasets from the OPUS collection up to August 2024 and apply a series of filtering steps, including language identification, semantic deduplication, and quality filtering (see the sketch below).
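As an illustration of the language-identification step in that filtering pipeline, the sketch below keeps only sentences whose detected language matches the expected one, using fastText's public LID model. The model path, confidence threshold, and helper name are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical language-identification filter, one of the filtering steps
# described above. Uses fastText's lid.176.bin model; the 0.9 confidence
# threshold is an assumed value, not taken from the paper.
import fasttext

lid_model = fasttext.load_model("lid.176.bin")

def keep_sentence(text: str, expected_lang: str, threshold: float = 0.9) -> bool:
    """Return True if the detected language matches and confidence is high enough."""
    labels, probs = lid_model.predict(text.replace("\n", " "))
    lang = labels[0].replace("__label__", "")
    return lang == expected_lang and probs[0] >= threshold

# Example: keep only English sentences from a small monolingual sample.
corpus = ["This is an English sentence.", "Dies ist ein deutscher Satz."]
filtered = [s for s in corpus if keep_sentence(s, "en")]
print(filtered)
```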
```bibtex
@misc{cui2025multilingualmachinetranslationopen,
      title={Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study},
      author={Menglong Cui and Pengzhi Gao and Wei Liu and Jian Luan and Bin Wang},
      year={2025},
      eprint={2502.02481},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02481},
}
```