Crystalcareai committed
Fix formatting
README.md
CHANGED
@@ -30,4 +30,4 @@ Here are our internal benchmarks using the main branch of lm evaluation harness:
 | BBH | 51.1 | 50.6 |
 | GPQA | 31.2 | 29.02 |
 
-The script used for evaluation can be found inside this repository under /eval.sh, or click [here](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite/blob/main/eval.
+The script used for evaluation can be found inside this repository under /eval.sh, or click [here](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite/blob/main/eval).