---
license: gemma
license_name: license
license_link: LICENSE
metrics:
- bleu
- comet
base_model:
- ModelSpace/GemmaX2-28-9B-Pretrain
pipeline_tag: translation
library_name: transformers
language:
- ar
- bn
- cs
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- km
- ko
- lo
- ms
- my
- nl
- pl
- pt
- ru
- th
- tl
- tr
- ur
- vi
- zh
---
## Model Description
GemmaX2-28-9B-v0.1 is an LLM-based translation model. It was fine-tuned from GemmaX2-28-9B-Pretrain, a language model developed through continual pretraining of Gemma2-9B on a mix of 56 billion tokens of monolingual and parallel data across 28 languages. For more details, see our paper: [Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study](https://arxiv.org/abs/2502.02481).
- **Developed by:** Xiaomi
- **Model type:** GemmaX2-28-9B-Pretrain is obtained by continually pretraining Gemma2-9B on a large amount of monolingual and parallel data. GemmaX2-28-9B-v0.1 is then derived through supervised fine-tuning on a small set of high-quality translation instruction data.
- **Languages:** Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, Polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese (see the code-to-name lookup below).
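
For convenience, here is a sketch of how the ISO 639-1 codes in the metadata header map to the language names above; `GEMMAX2_LANGUAGES` is an illustrative helper name, not part of the model's API.

```python
# Supported languages: ISO 639-1 code -> English name (illustrative lookup).
GEMMAX2_LANGUAGES = {
    "ar": "Arabic", "bn": "Bengali", "cs": "Czech", "de": "German",
    "en": "English", "es": "Spanish", "fa": "Persian", "fr": "French",
    "he": "Hebrew", "hi": "Hindi", "id": "Indonesian", "it": "Italian",
    "ja": "Japanese", "km": "Khmer", "ko": "Korean", "lo": "Lao",
    "ms": "Malay", "my": "Burmese", "nl": "Dutch", "pl": "Polish",
    "pt": "Portuguese", "ru": "Russian", "th": "Thai", "tl": "Tagalog",
    "tr": "Turkish", "ur": "Urdu", "vi": "Vietnamese", "zh": "Chinese",
}
```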
## Model Performance
Translation quality is evaluated with BLEU and COMET; see the [paper](https://arxiv.org/abs/2502.02481) for detailed results across the 28 languages.

## Run the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ModelSpace/GemmaX2-28-9B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format: "Translate this from <source> to <target>:\n<source>: <text>\n<target>:"
text = "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)

# The decoded output includes the prompt followed by the translation.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
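
The prompt format generalizes to any of the 28 supported language pairs. Below is a minimal sketch of a reusable helper that builds the prompt and strips the echoed prompt tokens from the output; `build_prompt` and `translate` are illustrative names, not part of the model's API.

```python
def build_prompt(source_lang: str, target_lang: str, text: str) -> str:
    # Same format as the example above:
    # "Translate this from <source> to <target>:\n<source>: <text>\n<target>:"
    return (
        f"Translate this from {source_lang} to {target_lang}:\n"
        f"{source_lang}: {text}\n{target_lang}:"
    )

def translate(source_lang: str, target_lang: str, text: str) -> str:
    inputs = tokenizer(build_prompt(source_lang, target_lang, text),
                       return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512)
    # Keep only the newly generated tokens, dropping the echoed prompt.
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(translate("English", "German", "I love machine translation."))
```

This sketch reuses the `tokenizer` and `model` loaded in the previous block, so it assumes that block has already been run.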
## Citation
```bibtex
@misc{cui2025multilingualmachinetranslationopen,
  title={Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study},
  author={Menglong Cui and Pengzhi Gao and Wei Liu and Jian Luan and Bin Wang},
  year={2025},
  eprint={2502.02481},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.02481},
}
```
## Limitations
GemmaX2-28-9B-v0.1 supports only the 28 languages listed above and does not guarantee strong translation performance for other languages. We will continue to improve the translation quality of GemmaX2-28-9B, and future versions will be released in due course.