Update README.md
README.md
CHANGED
@@ -148,8 +148,7 @@ See [Report for miscii-1020](https://api.wandb.ai/links/flandrelabs-carnegie-mel
 
 -----
 
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sthenno-com__miscii-14b-1028).
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) (0215)
 
 | Metric |Value|
 |-------------------|----:|
@@ -161,9 +160,10 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |MuSR (0-shot) |12.00|
 |MMLU-PRO (5-shot) |46.14|
 
-
+
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
-
+
+Refined:
 
 | Metric |Value|
 |-------------------|----:|
@@ -175,3 +175,4 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 |MuSR (0-shot) |12.00|
 |MMLU-PRO (5-shot) |46.14|
 
+$$\large{\text{There's nothing more to Show}}$$