I have no idea what I’m doing… if this causes the apocalypse someone please let me know.
magnum-v1-72b 8.0bpw h8 EXL2
Includes the measurement.json file for further quantization.
Original Model: https://huggingface.co/anthracite-org/magnum-v1-72b
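To run this quant directly, a minimal sketch using the exllamav2 library might look like the following. The model path, multi-GPU autosplit, and sampler settings are placeholder assumptions, not part of this card; at 8.0bpw the weights alone take roughly 72 GB of VRAM, plus cache on top.

```python
# Minimal sketch: loading this EXL2 quant with exllamav2.
# Path and sampler settings below are placeholders, not recommendations.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/magnum-v1-72b-8.0bpw-h8-exl2"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache is allocated during autosplit load
model.load_autosplit(cache)               # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                # example value, tune to taste

prompt = "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n"
generator.warmup()
print(generator.generate_simple(prompt, settings, 200))
```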
Original Model Card
This is the first in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of Qwen-2 72B Instruct.
Prompting
The model has been instruct-tuned with ChatML formatting. A typical input looks like this:
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
Credits
This model has been a team effort, and credit goes to all members of Anthracite.
We'd also like to thank Kearm for sponsoring the compute needed to train this model.
Training
The training was done with 55 million tokens of high-quality RP data, over 1.5 epochs. We used 8x AMD Instinct™ MI300X Accelerators for the full-parameter fine-tuning of the model.
Safety
...
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 42.17 |
| IFEval (0-Shot) | 76.06 |
| BBH (3-Shot) | 57.65 |
| MATH Lvl 5 (4-Shot) | 35.27 |
| GPQA (0-shot) | 18.79 |
| MuSR (0-shot) | 15.62 |
| MMLU-PRO (5-shot) | 49.64 |