# Adventien-GPTJ

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                | Value |
|-----------------------|------:|
| Avg.                  | 34.36 |
| ARC (25-shot)         | 42.49 |
| HellaSwag (10-shot)   | 69.21 |
| MMLU (5-shot)         | 25.4  |
| TruthfulQA (0-shot)   | 36.95 |
| Winogrande (5-shot)   | 60.22 |
| GSM8K (5-shot)        | 1.59  |
| DROP (3-shot)         | 4.69  |
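
The `Avg.` row appears to be the unweighted mean of the seven benchmark scores. A minimal sketch for reproducing that figure from the table values (variable names here are illustrative, not part of any leaderboard API):

```python
# Recompute the leaderboard "Avg." as the unweighted mean of the seven scores.
scores = {
    "ARC (25-shot)": 42.49,
    "HellaSwag (10-shot)": 69.21,
    "MMLU (5-shot)": 25.4,
    "TruthfulQA (0-shot)": 36.95,
    "Winogrande (5-shot)": 60.22,
    "GSM8K (5-shot)": 1.59,
    "DROP (3-shot)": 4.69,
}

average = sum(scores.values()) / len(scores)
print(f"Avg. = {average:.2f}")  # -> Avg. = 34.36
```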