# Qwenslerp2-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with fblgit/cybertron-v4-qw7B-MGS as the base model.
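For intuition, here is a minimal NumPy sketch of what SLERP does to a pair of weight tensors. It illustrates the interpolation formula only; the function name and signature are hypothetical, and mergekit's actual implementation handles per-tensor normalization and edge cases differently.

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors (illustrative)."""
    v0, v1 = w0.ravel(), w1.ravel()
    # Angle between the two weight vectors, from their normalized dot product.
    cos_theta = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if theta < 1e-4:  # nearly colinear weights: plain linear interpolation is fine
        return (1 - t) * w0 + t * w1
    # Standard SLERP coefficients: t=0 reproduces w0, t=1 reproduces w1.
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return (s0 * v0 + s1 * v1).reshape(w0.shape)
```

Unlike plain linear averaging, SLERP follows the arc between the two weight vectors, which preserves their magnitude relationships rather than cutting through the interior of the sphere.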
### Models Merged

The following models were included in the merge:
* [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS)
* [Tsunami-th/Tsunami-0.5x-7B-Instruct](https://huggingface.co/Tsunami-th/Tsunami-0.5x-7B-Instruct)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: fblgit/cybertron-v4-qw7B-MGS
  - model: Tsunami-th/Tsunami-0.5x-7B-Instruct
merge_method: slerp
base_model: fblgit/cybertron-v4-qw7B-MGS
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: cybertron at the input & output layers, Tsunami in the middle layers
```
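With this configuration saved as, say, `config.yaml` (file name assumed), the merge can be reproduced with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-dir`. The `t` schedule is interpolated across the layer stack: `t=0` keeps the base model's weights and `t=1` takes the other model's, so the endpoints stay close to cybertron while the middle layers lean toward Tsunami.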
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard).
| Metric              | Value |
|---------------------|-------|
| Avg.                | 30.42 |
| IFEval (0-shot)     | 52.94 |
| BBH (3-shot)        | 37.44 |
| MATH Lvl 5 (4-shot) | 31.87 |
| GPQA (0-shot)       |  8.39 |
| MuSR (0-shot)       | 12.82 |
| MMLU-PRO (5-shot)   | 39.06 |
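The merged checkpoint loads like any other causal LM. Below is a minimal usage sketch with Hugging Face Transformers; it assumes `torch`, `transformers`, and `accelerate` (for `device_map="auto"`) are installed, and the prompt is just an example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Qwenslerp2-7B"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```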