lbourdois committed
Commit 172d8a2 · verified · 1 Parent(s): 63d8cd9

Improve language tag

Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.
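For reference, once the change is merged the added codes become part of the card metadata. A minimal sketch of how one might check them with `huggingface_hub` (illustrative only, not part of this PR; the repo id `Youlln/ECE-Qwen0.5B-FT-V2` is taken from the card's model-index):

```python
# Illustrative check, not part of this PR: load the model card and
# print the language tags once the change is merged.
from huggingface_hub import ModelCard

card = ModelCard.load("Youlln/ECE-Qwen0.5B-FT-V2")
print(card.data.language)  # expected: ['zho', 'eng', 'fra', ...]
```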

Files changed (1)
  1. README.md +142 -128
README.md CHANGED
@@ -1,128 +1,142 @@
- ---
- base_model:
- - Qwen/Qwen2.5-0.5B-Instruct
- datasets:
- - Augmentation-Scaling-Laws/math-seed-data
- library_name: transformers
- license: apache-2.0
- pipeline_tag: text-generation
- model-index:
- - name: ECE-Qwen0.5B-FT-V2
-   results:
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: IFEval (0-Shot)
-       type: HuggingFaceH4/ifeval
-       args:
-         num_few_shot: 0
-     metrics:
-     - type: inst_level_strict_acc and prompt_level_strict_acc
-       value: 25.26
-       name: strict accuracy
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: BBH (3-Shot)
-       type: BBH
-       args:
-         num_few_shot: 3
-     metrics:
-     - type: acc_norm
-       value: 7.63
-       name: normalized accuracy
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: MATH Lvl 5 (4-Shot)
-       type: hendrycks/competition_math
-       args:
-         num_few_shot: 4
-     metrics:
-     - type: exact_match
-       value: 1.21
-       name: exact match
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: GPQA (0-shot)
-       type: Idavidrein/gpqa
-       args:
-         num_few_shot: 0
-     metrics:
-     - type: acc_norm
-       value: 2.24
-       name: acc_norm
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: MuSR (0-shot)
-       type: TAUR-Lab/MuSR
-       args:
-         num_few_shot: 0
-     metrics:
-     - type: acc_norm
-       value: 0.89
-       name: acc_norm
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
-       name: Open LLM Leaderboard
-   - task:
-       type: text-generation
-       name: Text Generation
-     dataset:
-       name: MMLU-PRO (5-shot)
-       type: TIGER-Lab/MMLU-Pro
-       config: main
-       split: test
-       args:
-         num_few_shot: 5
-     metrics:
-     - type: acc
-       value: 7.4
-       name: accuracy
-     source:
-       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
-       name: Open LLM Leaderboard
- ---
-
-
- ### Model Description
-
- The model you’re using is based on Qwen/Qwen2.5-0.5B-Instruct, a powerful AI designed to follow instructions across a wide range of tasks. Through specialized fine-tuning, this model has been trained to become highly proficient in solving complex mathematical problems. By using a dataset specifically focused on math (Augmentation-Scaling-Laws/math-seed-data), it has gained the ability to handle advanced calculations and mathematical reasoning, making it an ideal assistant for anyone needing help with math-related tasks or challenges.
-
- - **Developed by:** Youri Lalain (@Youlln)
- - **Organization:** ECE engineering school
-
-
-
- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Youlln__ECE-Qwen0.5B-FT-V2)
-
- | Metric |Value|
- |-------------------|----:|
- |Avg. | 7.44|
- |IFEval (0-Shot) |25.26|
- |BBH (3-Shot) | 7.63|
- |MATH Lvl 5 (4-Shot)| 1.21|
- |GPQA (0-shot) | 2.24|
- |MuSR (0-shot) | 0.89|
- |MMLU-PRO (5-shot) | 7.40|
-
+ ---
+ base_model:
+ - Qwen/Qwen2.5-0.5B-Instruct
+ datasets:
+ - Augmentation-Scaling-Laws/math-seed-data
+ library_name: transformers
+ license: apache-2.0
+ pipeline_tag: text-generation
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ model-index:
+ - name: ECE-Qwen0.5B-FT-V2
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 25.26
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 7.63
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 1.21
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 2.24
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 0.89
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 7.4
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Youlln/ECE-Qwen0.5B-FT-V2
+       name: Open LLM Leaderboard
+ ---
+
+
+ ### Model Description
+
+ The model you’re using is based on Qwen/Qwen2.5-0.5B-Instruct, a powerful AI designed to follow instructions across a wide range of tasks. Through specialized fine-tuning, this model has been trained to become highly proficient in solving complex mathematical problems. By using a dataset specifically focused on math (Augmentation-Scaling-Laws/math-seed-data), it has gained the ability to handle advanced calculations and mathematical reasoning, making it an ideal assistant for anyone needing help with math-related tasks or challenges.
+
+ - **Developed by:** Youri Lalain (@Youlln)
+ - **Organization:** ECE engineering school
+
+
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Youlln__ECE-Qwen0.5B-FT-V2)
+
+ | Metric |Value|
+ |-------------------|----:|
+ |Avg. | 7.44|
+ |IFEval (0-Shot) |25.26|
+ |BBH (3-Shot) | 7.63|
+ |MATH Lvl 5 (4-Shot)| 1.21|
+ |GPQA (0-shot) | 2.24|
+ |MuSR (0-shot) | 0.89|
+ |MMLU-PRO (5-shot) | 7.40|
+
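Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal usage sketch of the model described above (not part of this commit; the prompt and generation settings are illustrative assumptions) could look like:

```python
# Minimal sketch: run the fine-tuned checkpoint with the transformers
# text-generation pipeline; prompt and max_new_tokens are arbitrary choices.
from transformers import pipeline

generator = pipeline("text-generation", model="Youlln/ECE-Qwen0.5B-FT-V2")
result = generator("Solve step by step: what is 17 * 24?", max_new_tokens=64)
print(result[0]["generated_text"])
```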