## Model Description
This model is based on LilRg/ECE-1B-merge-PRYMMAL and has been fine-tuned on the instruction-focused mosaicml/instruct-v3 dataset, strengthening its instruction-following and advanced reasoning abilities on complex problems.
- Developed by: Youri Lalain (@Youlln)
- Organization: ECE engineering school
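
Below is a minimal usage sketch with the Hugging Face `transformers` text-generation API, assuming the model is loaded as a standard causal language model; the prompt and generation settings are illustrative, not prescriptive.

```python
# Minimal sketch: load the fine-tuned model and run an instruction prompt.
# Generation parameters below are illustrative assumptions, not recommended settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Youlln/ECE-PRYMMAL1B-FT-V1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # use torch.float32 on CPU-only machines
    device_map="auto",
)

prompt = "Explain the difference between supervised and unsupervised learning in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```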
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 11.80 |
| IFEval (0-Shot) | 21.44 |
| BBH (3-Shot) | 16.19 |
| MATH Lvl 5 (4-Shot) | 6.12 |
| GPQA (0-shot) | 3.80 |
| MuSR (0-shot) | 3.87 |
| MMLU-PRO (5-shot) | 19.36 |
## Evaluation results
- IFEval (0-Shot), strict accuracy: 21.440 (Open LLM Leaderboard)
- BBH (3-Shot), normalized accuracy: 16.190 (Open LLM Leaderboard)
- MATH Lvl 5 (4-Shot), exact match: 6.120 (Open LLM Leaderboard)
- GPQA (0-shot), acc_norm: 3.800 (Open LLM Leaderboard)
- MuSR (0-shot), acc_norm: 3.870 (Open LLM Leaderboard)
- MMLU-PRO (5-shot), accuracy on the test set: 19.360 (Open LLM Leaderboard)