Update README.md
README.md
CHANGED
@@ -51,17 +51,19 @@ pipeline_tag: text-generation
We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).

### Main Results
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|-----------------------------------------------|---------|-------|-----------|-------|------------|
+| Llama-2-70b-instruct-v2 (Ours, Local Reproduction) | 72.7 | 71.6 | 87.7 | 69.7 | 61.6 |
| Llama-2-70b-instruct-1024 (***Ours***, ***Local Reproduction***) | **72.0** | **70.7** | **87.4** | **69.3** | **60.7** |
-| llama-65b-instruct (…
+| llama-65b-instruct (Ours, Local Reproduction) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 |
-| llama-30b-instruct-2048 (…
-| llama-30b-instruct (…
+| llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
+| llama-30b-instruct-2048 (Ours, Local Reproduction) | 67.0 | 64.9 | 85.0 | 61.9 | 56.0 |
+| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 |
+| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
-| llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |

### Scripts
- Prepare evaluation environments:
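The hunk ends right after the start of the Scripts list, so the actual preparation commands are not visible here. As a minimal sketch, pinning the harness to the commit cited above might look like the following; only the repository URL and commit hash come from the text, everything else is an assumption about the setup:

```bash
# Clone the evaluation harness and pin it to the commit referenced above.
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463

# Install the harness and its dependencies into the current environment.
pip install -e .
```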
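With that environment in place, each benchmark in the table is scored through the harness's `main.py` entry point. The invocation below is a sketch rather than the authors' actual script: the model path is a placeholder, and the task names and few-shot counts follow the usual Open LLM Leaderboard conventions for this harness version (`arc_challenge` 25-shot, `hellaswag` 10-shot, the `hendrycksTest-*` MMLU subtasks 5-shot, `truthfulqa_mc` 0-shot), which is an assumption here.

```bash
# Example: 25-shot ARC-Challenge on a single GPU. Swap --tasks and
# --num_fewshot for the other benchmarks (hellaswag 10-shot,
# hendrycksTest-* subtasks 5-shot, truthfulqa_mc 0-shot). A 70B checkpoint
# needs multi-GPU or offloading settings beyond this sketch.
python main.py \
  --model hf-causal \
  --model_args pretrained=<model_name_or_path> \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --batch_size 1 \
  --device cuda:0 \
  --output_path results/arc_challenge.json
```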