Update README.md
README.md CHANGED
@@ -15,60 +15,17 @@ pipeline_tag: text-generation

---

# Luth-0.6B-Instruct

**Luth-0.6B-Instruct** is a French fine-tuned version of [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. Fine-tuning substantially improved the model's French capabilities in instruction following, math, and general knowledge, while its English capabilities remained stable and even improved in some areas.

## Benchmark Results

We used LightEval for evaluation, with custom tasks for the French benchmarks. The models were evaluated with `temperature=0` (greedy decoding). In the tables below, the best score for each benchmark is underlined.
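
As a rough illustration only (not the exact command used for this card), a LightEval run broadly follows the sketch below. The task identifier, custom-task file, and flag spellings are placeholders, and the precise CLI syntax varies between LightEval versions:

```bash
# Hypothetical LightEval invocation -- the task name, custom-task file, and
# flags are placeholders; check your installed LightEval version's
# documentation for the exact syntax.
lighteval accelerate \
  --model_args "pretrained=kurakurai/Luth-0.6B-Instruct" \
  --custom_tasks ./custom_french_tasks.py \
  --tasks "custom|ifeval-fr|0|0" \
  --output_dir ./eval_results
# temperature=0 amounts to greedy decoding and is set in the generation settings.
```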

### Evaluation Visualizations

**French Evaluation:**

![French Evaluation Benchmark](qwen_luth_fr.png)

**English Evaluation:**

![English Evaluation Benchmark](qwen_luth_fr.png)

### French Benchmark Scores

| Benchmark         | Qwen3-0.6B   | Qwen2.5-0.5B-Instruct | Luth-0.6B-Instruct |
|-------------------|--------------|-----------------------|--------------------|
| ifeval-fr         | 44.45        | 22.18                 | <u>48.24</u>       |
| gpqa-diamond-fr   | 28.93        | 23.86                 | <u>33.50</u>       |
| mmlu-fr           | 27.16        | 35.04                 | <u>40.23</u>       |
| math-500-fr       | 29.20        | 10.00                 | <u>43.00</u>       |
| arc-chall-fr      | 31.31        | 28.23                 | <u>33.88</u>       |
| hellaswag-fr      | 25.11        | <u>51.45</u>          | 45.70              |

### English Benchmark Scores

| Benchmark         | Qwen3-0.6B   | Qwen2.5-0.5B-Instruct | Luth-0.6B-Instruct |
|-------------------|--------------|-----------------------|--------------------|
| ifeval-en         | <u>57.86</u> | 29.21                 | 53.97              |
| gpqa-diamond-en   | <u>29.80</u> | 26.77                 | 28.28              |
| mmlu-en           | 36.85        | 43.80                 | <u>48.10</u>       |
| math-500-en       | 45.00        | 31.80                 | <u>47.80</u>       |
| arc-chall-en      | 33.62        | 32.17                 | <u>35.92</u>       |
| hellaswag-en      | 42.91        | <u>49.56</u>          | 46.96              |

## Citation

```bibtex
@misc{luth2025kurakurai,
  title        = {Luth-0.6B-Instruct},
  author       = {Kurakura AI Team},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kurakurai/Luth-0.6B}},
  note         = {Qwen3-0.6B fine-tuned on French datasets}
}
```

---

# Luth-0.6B-Instruct-GGUF

**Luth-0.6B-Instruct** is a French fine-tuned version of [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), trained on the [Luth-SFT](https://huggingface.co/datasets/kurakurai/luth-sft) dataset. Fine-tuning substantially improved the model's French capabilities in instruction following, math, and general knowledge, while its English capabilities remained stable and even improved in some areas.

Find more details in the original model card: https://huggingface.co/kurakurai/Luth-0.6B-Instruct

## 🏃 How to run Luth

Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):

```bash
llama-cli -hf kurakurai/Luth-0.6B-Instruct-GGUF
```
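
The same GGUF can also be served behind an OpenAI-compatible HTTP API with `llama-server`; a minimal sketch (the port choice is arbitrary):

```bash
# Fetch the GGUF from the Hugging Face Hub (cached after the first run) and
# expose an OpenAI-compatible endpoint on localhost.
llama-server -hf kurakurai/Luth-0.6B-Instruct-GGUF --port 8080
```

Chat requests can then be sent to `http://localhost:8080/v1/chat/completions`.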