Update README.md
README.md (CHANGED):

```diff
@@ -22,7 +22,8 @@ base_model: google/gemma-2-9b
 </div>
 
 # Gemma-SEA-LION-v3-9B
-
+
+[SEA-LION](https://arxiv.org/abs/2504.05747) is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
 
 Gemma-SEA-LION-v3-9B is a multilingual model which has undergone continued pre-training on approximately **200B** tokens across the 11 official Southeast Asian languages: English, Chinese, Vietnamese, Indonesian, Thai, Tamil, Filipino, Malay, Khmer, Lao, Burmese.
 
@@ -44,7 +45,7 @@ For tokenisation, the model employs the default tokenizer used in Gemma 2 9B.
 We evaluated Gemma-SEA-LION-v3-9B on general language capabilities.
 
 #### General Language Capabilities
-For the evaluation of general language capabilities, we employed the [SEA
+For the evaluation of general language capabilities, we employed the [SEA-HELM evaluation benchmark](https://arxiv.org/abs/2502.14301) across a variety of tasks.
 These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
 
 Note: SEA HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task is normalised to account for baseline performance due to random chance.
```