Gemma 2B Zephyr SFT
The Zephyr SFT recipe applied on top of Gemma 2B
Model description
- Model type: A 2.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily English
- Finetuned from model: google/gemma-2b
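For reference, a minimal usage sketch with the transformers library is shown below. The prompt and generation settings are illustrative, and it assumes the repo ships a chat template in the Zephyr style:

```python
# Minimal sketch: loading and prompting the model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wandb/gemma-2b-zephyr-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Assumes a chat template is defined in the tokenizer config.
messages = [{"role": "user", "content": "Explain supervised fine-tuning in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```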
Recipe
We trained using the alignment-handbook recipe, with training runs logged to W&B.
Visit the W&B workspace here.
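The exact configuration lives in the alignment-handbook repo and the W&B workspace. As a rough sketch of what a Zephyr-style SFT run looks like with trl's SFTTrainer (the dataset and every hyperparameter below are illustrative placeholders, not the values used for this model; exact arguments vary by trl version):

```python
# Hedged sketch of a Zephyr-style SFT run with trl's SFTTrainer.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset is an assumption for illustration; the card does not state
# which dataset this model was trained on.
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

config = SFTConfig(
    output_dir="gemma-2b-zephyr-sft",
    per_device_train_batch_size=4,   # placeholder values
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    report_to="wandb",               # log to W&B, as in the recipe
)

trainer = SFTTrainer(
    model="google/gemma-2b",         # base model from the model description
    args=config,
    train_dataset=dataset,
)
trainer.train()
```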
License
This model has the same license as the original Gemma model collection.

Compute
- Provided by Lambda Labs: a single 8xA100 80GB node
- Training time: around 2 hours
Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 47.18 |
| AI2 Reasoning Challenge (25-shot) | 49.74 |
| HellaSwag (10-shot)               | 72.38 |
| MMLU (5-shot)                     | 41.37 |
| TruthfulQA (0-shot)               | 34.42 |
| Winogrande (5-shot)               | 66.93 |
| GSM8k (5-shot)                    | 18.27 |
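To sanity-check these numbers locally, one option is EleutherAI's lm-evaluation-harness, which the leaderboard is built on. A sketch for a single task follows; exact scores depend on the harness version and settings the leaderboard pins:

```python
# Hedged sketch: re-running one leaderboard task with lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=wandb/gemma-2b-zephyr-sft,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,  # matches the leaderboard's 25-shot ARC setting
)
print(results["results"]["arc_challenge"])
```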
Evaluation results
All results reported on the Open LLM Leaderboard:
- AI2 Reasoning Challenge (25-shot, test set): 49.74 normalized accuracy
- HellaSwag (10-shot, validation set): 72.38 normalized accuracy
- MMLU (5-shot, test set): 41.37 accuracy
- TruthfulQA (0-shot, validation set): 34.42 mc2
- Winogrande (5-shot, validation set): 66.93 accuracy
- GSM8k (5-shot, test set): 18.27 accuracy