Adding Evaluation Results #3
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -64,4 +64,17 @@ If you use the data or code in this repo, please cite the repo.
   howpublished = {\url{https://github.com/juyongjiang/CodeUp}},
 }
 ```
-Naturally, you should also cite the original LLaMA V1 [1] & V2 paper [2], and the Self-Instruct paper [3], and the LoRA paper [4], and the [Stanford Alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), and [Alpaca-LoRA repo](https://github.com/tloen/alpaca-lora), and [Code Alpaca repo](https://github.com/sahil280114/codealpaca), and [PEFT](https://github.com/huggingface/peft).
+Naturally, you should also cite the original LLaMA V1 [1] & V2 paper [2], and the Self-Instruct paper [3], and the LoRA paper [4], and the [Stanford Alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), and [Alpaca-LoRA repo](https://github.com/tloen/alpaca-lora), and [Code Alpaca repo](https://github.com/sahil280114/codealpaca), and [PEFT](https://github.com/huggingface/peft).
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_deepse__CodeUp-Llama-2-13b-chat-hf)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 50.48                     |
+| ARC (25-shot)         | 59.04                     |
+| HellaSwag (10-shot)   | 81.93                     |
+| MMLU (5-shot)         | 54.63                     |
+| TruthfulQA (0-shot)   | 44.12                     |
+| Winogrande (5-shot)   | 74.51                     |
+| GSM8K (5-shot)        | 15.24                     |
+| DROP (3-shot)         | 23.87                     |
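For a quick sanity check, the Avg. row is consistent with the plain arithmetic mean of the seven per-task scores in the table. A minimal Python sketch with those values hard-coded (the variable names are illustrative, not part of the PR):

```python
from statistics import mean

# Per-task scores copied from the table above
# (Open LLM Leaderboard, CodeUp-Llama-2-13b-chat-hf).
scores = {
    "ARC (25-shot)": 59.04,
    "HellaSwag (10-shot)": 81.93,
    "MMLU (5-shot)": 54.63,
    "TruthfulQA (0-shot)": 44.12,
    "Winogrande (5-shot)": 74.51,
    "GSM8K (5-shot)": 15.24,
    "DROP (3-shot)": 23.87,
}

avg = mean(scores.values())
print(f"{avg:.2f}")  # 50.48, matching the Avg. row
```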
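The "Detailed results" link points to a per-task details dataset on the Hub. A hedged sketch of pulling it with the `datasets` library, for anyone who wants to go beyond the summary table; the config and split names vary between leaderboard versions, so they are queried rather than assumed:

```python
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/details_deepse__CodeUp-Llama-2-13b-chat-hf"

# The details repo is split into one config per task; list them
# rather than guessing their names.
configs = get_dataset_config_names(repo_id)
print(configs)

# Load one task's details; without a `split` argument this returns a
# DatasetDict whose keys show the available (typically dated) splits.
details = load_dataset(repo_id, configs[0])
print(details)
```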