Adding Evaluation Results #2
opened by leaderboard-pr-bot

README.md CHANGED
@@ -181,3 +181,17 @@ This is the **Full-Weight** of WizardLM-13B V1.1 model.
 - 🔥🔥🔥 [7/7/2023] The **WizardLM-13B-V1.1** achieves **6.74** on the [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **86.32%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **99.3%** on [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: the MT-Bench and AlpacaEval scores are self-tested; we will push an update and request review. All tests are completed under their official settings.)


+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-fp16)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 49.92 |
+| ARC (25-shot)         | 58.62 |
+| HellaSwag (10-shot)   | 81.07 |
+| MMLU (5-shot)         | 48.32 |
+| TruthfulQA (0-shot)   | 54.19 |
+| Winogrande (5-shot)   | 76.01 |
+| GSM8K (5-shot)        | 0.76  |
+| DROP (3-shot)         | 30.46 |
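For reference, the Avg. row appears to be the simple mean of the seven benchmark scores: (58.62 + 81.07 + 48.32 + 54.19 + 76.01 + 0.76 + 30.46) / 7 ≈ 49.92.

If you want the per-example records behind these numbers rather than the summary table, they live in the details dataset linked above. Below is a minimal sketch using the Hugging Face `datasets` library; the config name and split name are assumptions about the leaderboard's usual layout, not something this PR specifies.

```python
# Sketch: pull per-task detail records for this model from the Open LLM Leaderboard
# details dataset. The config ("harness_arc_challenge_25") and split ("latest") are
# assumptions about the typical leaderboard layout, not taken from this PR; adjust
# them to whatever configs the dataset actually exposes.
from datasets import load_dataset

REPO = "open-llm-leaderboard/details_TheBloke__WizardLM-13B-V1-1-SuperHOT-8K-fp16"

details = load_dataset(REPO, "harness_arc_challenge_25", split="latest")

print(details.column_names)  # fields recorded for each evaluated example
print(details[0])            # first evaluated example with its per-example scores
```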