hawei committed on
Commit 4beffcc · verified · 1 Parent(s): 54af325

Add paper link

Files changed (1)
  1. README.md +112 -109
README.md CHANGED
---
license: llama3.1
datasets:
- nvidia/OpenMathInstruct-2
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
model-index:
- name: Control-LLM-Llama3.1-8B-Math16
  results:
  - task:
      type: math-evaluation
    dataset:
      type: parquet
      name: Math, Math Hard, GSM8K
      dataset_kwargs:
        data_files: "https://github.com/linkedin/ControlLLM/blob/main/src/controlllm/inference/llm_eval_harness/additional_tasks/math/joined_math.parquet"
    metrics:
    - name: exact_match,none
      type: exact_match
      value: 0.6205678398534606
      stderr: 0.005249520342473376
      verified: false
    - name: exact_match,none (gsm8k_0shot_instruct)
      type: exact_match
      value: 0.8968915845337376
      stderr: 0.008376436987507811
      verified: false
    - name: exact_match,none (meta_math_0shot_instruct)
      type: exact_match
      value: 0.6166
      stderr: 0.006876797660918556
      verified: false
    - name: exact_match,none (meta_math_hard_0shot_instruct)
      type: exact_match
      value: 0.36027190332326287
      stderr: 0.013198755610252931
      verified: false
  - task:
      type: original-capability
    dataset:
      type: meta/Llama-3.1-8B-Instruct-evals
      name: Llama-3.1-8B-Instruct-evals Dataset
      dataset_path: "meta-llama/llama-3.1-8_b-instruct-evals"
      dataset_name: "Llama-3.1-8B-Instruct-evals__arc_challenge__details"
    metrics:
    - name: exact_match,strict-match
      type: exact_match
      value: 0.6001372485281902
      stderr: 0.002821514831773572
      verified: false
    - name: exact_match,strict-match (meta_arc_0shot_instruct)
      type: exact_match
      value: 0.8248927038626609
      stderr: 0.011139722235859526
      verified: false
    - name: exact_match,strict-match (meta_gpqa_0shot_cot_instruct)
      type: exact_match
      value: 0.3080357142857143
      stderr: 0.021836780796366417
      verified: false
    - name: exact_match,strict-match (meta_mmlu_0shot_instruct)
      type: exact_match
      value: 0.7159948725252813
      stderr: 0.00380556397209409
      verified: false
    - name: exact_match,strict-match (meta_mmlu_pro_5shot_instruct)
      type: exact_match
      value: 0.45403922872340424
      stderr: 0.004539171007529716
      verified: false
---
# Control-LLM-Llama3.1-8B-Math16
This model is a fine-tune of Llama-3.1-8B-Instruct for mathematical tasks, trained on the OpenMathInstruct-2 (OpenMath2) dataset.

## Linked Paper
This model is associated with the paper: [Control-LLM](https://arxiv.org/abs/2501.10979).

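For quick testing, here is a minimal inference sketch using the standard `transformers` chat workflow. The `<org>` namespace, the prompt, and the generation settings are illustrative assumptions, not values published by the authors.

```python
# Minimal inference sketch. Assumptions: standard transformers chat API;
# "<org>" is a placeholder for this repo's actual namespace; the prompt and
# generation settings are illustrative, not author-recommended.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<org>/Control-LLM-Llama3.1-8B-Math16"  # placeholder namespace

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Solve: 12 * (3 + 4) - 5"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
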
## Evaluation Results
Here is an overview of the evaluation results and findings:

### Benchmark Results Table
The table below summarizes evaluation results across mathematical tasks and original capabilities.

| **Model**        | **MH** | **M**  | **G8K** | **M-Avg** | **ARC** | **GPQA** | **MLU** | **MLUP** | **O-Avg** | **Overall** |
|------------------|--------|--------|---------|-----------|---------|----------|---------|----------|-----------|-------------|
| Llama3.1-8B-Inst | 23.7   | 50.9   | 85.6    | 52.1      | 83.4    | 29.9     | 72.4    | 46.7     | 60.5      | 56.3        |
| **Control LLM*** | 36.0   | 61.7   | **89.7**| 62.5      | 82.5    | 30.8     | **71.6**| 45.4     | **57.6**  | **60.0**    |

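The metric names in the metadata above (`exact_match,none`, `exact_match,strict-match`) and the `llm_eval_harness` path in the dataset URL suggest these scores come from an lm-evaluation-harness setup. Below is a rough reproduction sketch only: it assumes lm-evaluation-harness >= 0.4 with the stock `gsm8k` task as a stand-in for the custom `*_0shot_instruct` task definitions in the ControlLLM repository, and a placeholder repo namespace.

```python
# Rough reproduction sketch (assumptions: lm-evaluation-harness >= 0.4,
# stock gsm8k task instead of the custom ControlLLM task configs, and a
# placeholder "<org>" repo namespace).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<org>/Control-LLM-Llama3.1-8B-Math16,dtype=bfloat16",
    tasks=["gsm8k"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"]["gsm8k"])  # exact_match-style metrics per task
```
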
---
### Explanation
- **MH**: Math Hard
- **M**: Math
- **G8K**: GSM8K
- **M-Avg**: Math average across Math Hard, Math, and GSM8K
- **ARC**: ARC Challenge benchmark
- **GPQA**: GPQA benchmark (graduate-level, Google-proof QA)
- **MLU**: MMLU (Massive Multitask Language Understanding)
- **MLUP**: MMLU Pro
- **O-Avg**: Original-capability average across ARC, GPQA, MMLU, and MMLU Pro
- **Overall**: Combined average across all tasks (see the averaging sketch below)

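As a sanity check on the averages: the Control LLM row is consistent with unweighted means within each group, and for both rows **Overall** matches the mean of **M-Avg** and **O-Avg**. (The baseline row's group averages do not match simple means of the columns shown, so they likely aggregate over a larger task set; the sketch below therefore checks only the Control LLM row, under the unweighted-mean assumption.)

```python
# Verification sketch under an assumed unweighted-mean aggregation,
# checked against the Control LLM row of the table above.
math_scores = {"MH": 36.0, "M": 61.7, "G8K": 89.7}
orig_scores = {"ARC": 82.5, "GPQA": 30.8, "MMLU": 71.6, "MMLU-Pro": 45.4}

m_avg = sum(math_scores.values()) / len(math_scores)  # 62.47 -> 62.5
o_avg = sum(orig_scores.values()) / len(orig_scores)  # 57.58 -> 57.6
overall = (m_avg + o_avg) / 2                         # 60.02 -> 60.0

print(f"M-Avg={m_avg:.1f}  O-Avg={o_avg:.1f}  Overall={overall:.1f}")
```
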
### Catastrophic Forgetting on OpenMath
The following plot illustrates and compares catastrophic forgetting mitigation during training.

![Catastrophic Forgetting](plots/ControlLLM_CF_Plot_Math.png)

### Alignment Result
The plot below highlights the alignment result of the model trained with Control LLM.

![Alignment](plots/alignment_best.png)