Adding Evaluation Results
#1 opened by leaderboard-pr-bot

README.md CHANGED
@@ -76,3 +76,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.0+cu117
 - Datasets 2.10.1
 - Tokenizers 0.13.2
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HiTZ__alpaca-lora-65b-en-pt-es-ca)
+
+| Metric                | Value                     |
+|-----------------------|---------------------------|
+| Avg.                  | 59.31                     |
+| ARC (25-shot)         | 65.02                     |
+| HellaSwag (10-shot)   | 84.88                     |
+| MMLU (5-shot)         | 62.19                     |
+| TruthfulQA (0-shot)   | 46.06                     |
+| Winogrande (5-shot)   | 80.51                     |
+| GSM8K (5-shot)        | 26.69                     |
+| DROP (3-shot)         | 49.84                     |
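
For reference, the Avg. row appears to be the unweighted arithmetic mean of the seven per-benchmark scores in the table. A minimal sketch to check that, using only the values shown above:

```python
# Benchmark scores copied from the table in the diff (percentages).
scores = {
    "ARC (25-shot)": 65.02,
    "HellaSwag (10-shot)": 84.88,
    "MMLU (5-shot)": 62.19,
    "TruthfulQA (0-shot)": 46.06,
    "Winogrande (5-shot)": 80.51,
    "GSM8K (5-shot)": 26.69,
    "DROP (3-shot)": 49.84,
}

# Unweighted mean of the seven tasks; matches the reported Avg. after rounding.
avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # -> Avg. = 59.31
```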
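
To inspect the per-task details behind these numbers, the linked results dataset can be loaded with the 🤗 `datasets` library. A hedged sketch follows: the details repo is assumed to expose one config per task/run, so the config names are discovered at runtime rather than hard-coded, and the `"latest"` split name follows the convention used by the leaderboard's auto-generated dataset cards.

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_HiTZ__alpaca-lora-65b-en-pt-es-ca"

# List the available configs (one per task/run) instead of guessing names.
configs = get_dataset_config_names(repo)
print(configs[:5])

# Load one config as an example; substitute the task you care about.
details = load_dataset(repo, configs[0], split="latest")
print(details)
```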