
A very work-in-progress experiment.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---:|
| Avg. | 36.2 |
| ARC (25-shot) | 39.51 |
| HellaSwag (10-shot) | 33.9 |
| MMLU (5-shot) | 38.49 |
| TruthfulQA (0-shot) | 40.94 |
| Winogrande (5-shot) | 74.35 |
| GSM8K (5-shot) | 20.77 |
| DROP (3-shot) | 5.43 |
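
As a quick sanity check, the Avg. value is the unweighted mean of the seven benchmark scores listed above:

```python
# Open LLM Leaderboard scores for chargoddard/llama-2-34b-uncode,
# taken from the results listed above
scores = {
    "ARC (25-shot)": 39.51,
    "HellaSwag (10-shot)": 33.9,
    "MMLU (5-shot)": 38.49,
    "TruthfulQA (0-shot)": 40.94,
    "Winogrande (5-shot)": 74.35,
    "GSM8K (5-shot)": 20.77,
    "DROP (3-shot)": 5.43,
}

# The reported average is the unweighted mean across all benchmarks
avg = sum(scores.values()) / len(scores)
print(round(avg, 1))  # 36.2
```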
Downloads last month: 697

Model size: 33.7B params

Tensor type: FP16 (Safetensors)
