v000000 leaderboard-pr-bot committed
Commit a02b06b
Parent(s): 34ed087

Adding Evaluation Results (#2)


- Adding Evaluation Results (2e869d0020e754e4a7c0e89cb1fae2547e4e786e)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1)
  1. README.md +120 -12
README.md CHANGED

```diff
@@ -1,12 +1,7 @@
 ---
-datasets:
-- jondurbin/gutenberg-dpo-v0.1
-- Qwen/Qwen2.5-14B-Instruct
-- HuggingFaceH4/ultrafeedback_binarized
-base_model:
-- Qwen/Qwen2.5-14B-Instruct
-- v000000/Qwen2.5-14B-Gutenberg-1e-Delta
-- tanliboy/lambda-qwen2.5-14b-dpo-test
+language:
+- en
+license: apache-2.0
 library_name: transformers
 tags:
 - qwen
@@ -21,10 +16,110 @@ tags:
 - storywriting
 - roleplay
 - novelwriting
-license: apache-2.0
-language:
-- en
+base_model:
+- Qwen/Qwen2.5-14B-Instruct
+- v000000/Qwen2.5-14B-Gutenberg-1e-Delta
+- tanliboy/lambda-qwen2.5-14b-dpo-test
+datasets:
+- jondurbin/gutenberg-dpo-v0.1
+- Qwen/Qwen2.5-14B-Instruct
+- HuggingFaceH4/ultrafeedback_binarized
 pipeline_tag: text-generation
+model-index:
+- name: Qwen2.5-Lumen-14B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 80.64
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 48.51
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 0.0
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 10.4
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 10.29
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 43.36
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=v000000/Qwen2.5-Lumen-14B
+      name: Open LLM Leaderboard
 ---
 
 # Qwen2.5-Lumen-14B
@@ -226,4 +321,17 @@ The following models were included in the merge:
 -------------------------------------------------------------------------------
 
 - Context Length: Full 131,072 tokens and generation 8192 tokens
-- Qwen2(ChatML) Prompt format
+- Qwen2(ChatML) Prompt format
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_v000000__Qwen2.5-Lumen-14B)
+
+|Metric             |Value|
+|-------------------|----:|
+|Avg.               |32.20|
+|IFEval (0-Shot)    |80.64|
+|BBH (3-Shot)       |48.51|
+|MATH Lvl 5 (4-Shot)| 0.00|
+|GPQA (0-shot)      |10.40|
+|MuSR (0-shot)      |10.29|
+|MMLU-PRO (5-shot)  |43.36|
+
```
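As a sanity check, the "Avg." row in the added table is simply the arithmetic mean of the six benchmark scores, rounded to two decimals. A minimal Python sketch:

```python
# Benchmark scores from the leaderboard table above.
scores = {
    "IFEval (0-Shot)": 80.64,
    "BBH (3-Shot)": 48.51,
    "MATH Lvl 5 (4-Shot)": 0.00,
    "GPQA (0-shot)": 10.40,
    "MuSR (0-shot)": 10.29,
    "MMLU-PRO (5-shot)": 43.36,
}

# The leaderboard average is the unweighted mean of the six scores.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 32.2
```

This reproduces the reported Avg. of 32.20 exactly, confirming the table is internally consistent.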
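The "Qwen2(ChatML) Prompt format" bullet refers to the ChatML turn markup used by Qwen-family chat models. A minimal sketch of hand-rolling that format (the `chatml_prompt` helper is hypothetical; in practice `tokenizer.apply_chat_template` from `transformers` produces this for you):

```python
def chatml_prompt(messages):
    """Render a list of {role, content} dicts into ChatML text.

    Each turn is wrapped in <|im_start|>{role} ... <|im_end|> markers;
    a trailing assistant header cues the model to generate its reply.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a short scene."},
])
```

The resulting string begins with the system turn and ends with an open assistant header, which is where generation (up to the card's stated 8192-token limit) continues from.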