Malaysian Qwen 2.5 7B Instruct Reasoning SFT

Continued finetuning of https://huggingface.co/mesolitica/Malaysian-Qwen2.5-7B-Instruct on a highly curated Malaysian Reasoning dataset.
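
A minimal inference sketch with transformers is shown below; the prompt and sampling parameters are illustrative assumptions, not recommended settings.

```python
# Minimal sketch: load the released checkpoint and run one chat turn.
# Sampling parameters below are assumptions, not tuned recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mesolitica/Malaysian-Qwen2.5-7B-Reasoning-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Terjemahkan ke dialek Kelantan: Saya nak pergi ke pasar esok pagi."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```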

Improvements

  1. Reasoning on Math, Science, Translation, Dialects, Multiple Choice, Coding, and Maktabah Al Bakri.
  2. Warmup reasoning.

Training session

Finetuned on mesolitica/Malaysian-Reasoning to improve the model's reasoning in a Malaysian context.

How we train

  1. Full-parameter finetuning with a 12k context length.
  2. WandB at https://wandb.ai/huseinzol05/fpf-qwen2.5-7b-malaysian-12k-reasoning

Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5
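
The exact recipe is in the linked source; the snippet below is only a rough sketch of a full-parameter SFT run at 12k context, assuming TRL's SFTTrainer, and every hyperparameter shown is a placeholder rather than the value used for the released checkpoint.

```python
# Rough sketch of full-parameter SFT at ~12k context using TRL (assumed tooling).
# Hyperparameters are placeholders; see the linked source for the real recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("mesolitica/Malaysian-Reasoning", split="train")

config = SFTConfig(
    output_dir="fpf-qwen2.5-7b-malaysian-12k-reasoning",
    max_seq_length=12288,           # "12k" context length
    per_device_train_batch_size=1,  # assumption
    gradient_accumulation_steps=8,  # assumption
    learning_rate=2e-5,             # assumption
    num_train_epochs=2,             # assumption
    bf16=True,
)

trainer = SFTTrainer(
    model="mesolitica/Malaysian-Qwen2.5-7B-Instruct",  # the base checkpoint being continued
    args=config,
    train_dataset=dataset,
)
trainer.train()
```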

Benchmark

Dialect Translation

All benchmarks were generated using vLLM; evaluation is based on sacreBLEU chrF, max@5 (the best chrF score among 5 generations per example).

Source code for evaluation at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5/evaluate-dialect
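
The linked script is the source of truth; the sketch below only illustrates the chrF max@5 idea, i.e. sampling 5 candidates per prompt with vLLM and keeping the best sentence-level chrF score. The prompt format and sampling parameters are assumptions.

```python
# Illustrative chrF max@5 scoring with vLLM; not the exact evaluation script.
from vllm import LLM, SamplingParams
from sacrebleu.metrics import CHRF

llm = LLM(model="mesolitica/Malaysian-Qwen2.5-7B-Reasoning-SFT")
sampling = SamplingParams(n=5, temperature=0.7, max_tokens=512)  # 5 candidates per prompt (assumed settings)
chrf = CHRF()

prompts = ["Terjemahkan ke bahasa Melayu baku: ..."]  # prompts come from the dialect test set
references = ["..."]                                  # gold translations, aligned with prompts

outputs = llm.generate(prompts, sampling)
scores = []
for output, reference in zip(outputs, references):
    # max@5: keep the best chrF among the 5 sampled candidates
    best = max(
        chrf.sentence_score(candidate.text, [reference]).score
        for candidate in output.outputs
    )
    scores.append(best)

print("average chrF max@5:", sum(scores) / len(scores))
```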

Dialect to standard Malay,

From: johor To: malay, score: 53.80150476295704
From: kedah To: malay, score: 59.656783756383994
From: pahang To: malay, score: 54.613757432364295
From: negeri sembilan To: malay, score: 51.792391481041555
From: kelantan To: malay, score: 51.31613831154433
From: penang To: malay, score: 55.92313437424021
From: melaka To: malay, score: 49.89166863202441
average: 53.85648267865083

Standard Malay to dialect,

From: malay To: johor, score: 52.16692422736562
From: malay To: kedah, score: 53.25312749019
From: malay To: pahang, score: 53.94390758910295
From: malay To: negeri sembilan, score: 36.27480628354344
From: malay To: kelantan, score: 37.559315524669195
From: malay To: penang, score: 53.41504444742441
From: malay To: melaka, score: 65.06211074823777
average: 50.239319472933346

MalayMMLU

Source code for evaluation at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5/evaluate-malaymmlu
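
The linked script is authoritative; as a rough illustration only, Accuracy@k can be read as sampling k answers per question and counting the question as correct if any sample matches the gold option. This interpretation, the prompt format, and the extract_choice helper below are assumptions.

```python
# Illustrative Accuracy@k computation (assumed interpretation: a question is
# correct if any of the k sampled answers matches the gold option letter).
import re
from vllm import LLM, SamplingParams

def extract_choice(text):
    # Hypothetical helper: grab the first option letter A-E from a generation.
    match = re.search(r"\b([A-E])\b", text)
    return match.group(1) if match else None

def accuracy_at_k(llm, questions, k):
    # questions: [{"prompt": "...multiple-choice prompt...", "answer": "A"}, ...]
    sampling = SamplingParams(n=k, temperature=0.7 if k > 1 else 0.0, max_tokens=256)
    outputs = llm.generate([q["prompt"] for q in questions], sampling)
    correct = 0
    for question, output in zip(questions, outputs):
        predicted = {extract_choice(candidate.text) for candidate in output.outputs}
        if question["answer"] in predicted:
            correct += 1
    return correct / len(questions)

llm = LLM(model="mesolitica/Malaysian-Qwen2.5-7B-Reasoning-SFT")
# questions = ...  # load MalayMMLU examples into the structure above
# print("Accuracy@1:", accuracy_at_k(llm, questions, k=1))
# print("Accuracy@5:", accuracy_at_k(llm, questions, k=5))
```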

Evaluation based on Accuracy@1,


Evaluation based on Accuracy@5,


Special thanks

Special thanks to https://www.sns.com.my and NVIDIA for an 8x H100 node!
