---
license: gemma
license_name: license
license_link: LICENSE
metrics:
- bleu
- comet
base_model:
- ModelSpace/GemmaX2-28-9B-Pretrain
pipeline_tag: translation
library_name: transformers
language:
- ar
- bn
- cs
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- km
- ko
- lo
- ms
- my
- nl
- pl
- pt
- ru
- th
- tl
- tr
- ur
- vi
- zh
---

## Model Description

GemmaX2-28-9B-v0.1 is an LLM-based translation model. It was fine-tuned from GemmaX2-28-9B-Pretrain, a language model developed through continual pretraining of Gemma2-9B on a mix of 56 billion tokens of monolingual and parallel data spanning 28 languages. Please find more details in our paper: [Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study](https://arxiv.org/abs/2502.02481).

- **Developed by:** Xiaomi
- **Model type:** GemmaX2-28-9B-Pretrain is obtained by continually pretraining Gemma2-9B on a large amount of monolingual and parallel data. Subsequently, GemmaX2-28-9B-v0.1 is derived through supervised fine-tuning on a small set of high-quality translation instruction data.
- **Languages:** Arabic, Bengali, Czech, German, English, Spanish, Persian, French, Hebrew, Hindi, Indonesian, Italian, Japanese, Khmer, Korean, Lao, Malay, Burmese, Dutch, Polish, Portuguese, Russian, Thai, Tagalog, Turkish, Urdu, Vietnamese, Chinese.
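
Since the card declares `library_name: transformers` and `pipeline_tag: translation`, a short inference sketch may be useful. Because GemmaX2 is a decoder-only model, translation is produced by plain text completion. This is a minimal example assuming the standard `transformers` causal-LM API; the repository id and the prompt template are illustrative assumptions, not details taken from this card.

```python
# Minimal inference sketch for GemmaX2-28-9B-v0.1 (assumptions noted below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for this checkpoint.
model_id = "ModelSpace/GemmaX2-28-9B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical translation prompt: name the source and target languages,
# give the source sentence, and let the model complete the target side.
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译\nEnglish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
# Strip the prompt tokens so only the generated translation is printed.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
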
### Model Source