---
license: apache-2.0
tags:
- merge
- mergekit
- mlabonne/AlphaMonarch-7B
- bardsai/jaskier-7b-dpo-v5.6
- macadeliccc/MBX-7B-v3-DPO
---
# pastiche-crown-clown-7B-dare
pastiche-crown-clown-7B-dare is a DARE-TIES merge of the following models, created with [mergekit](https://github.com/cg123/mergekit):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
* [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)

See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the DARE method.
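At a high level, DARE treats each fine-tuned model as a set of delta weights over a shared base model, randomly drops most of each delta, and rescales the surviving entries so the expected update is unchanged. The sketch below illustrates that drop-and-rescale step for a single weight tensor; it is a simplified illustration of the idea from the paper, not mergekit's actual implementation, and the function name is invented for this example.

```python
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor, density: float) -> torch.Tensor:
    """Drop-And-REscale for one weight tensor (illustrative only).

    Keeps roughly a `density` fraction of the delta (finetuned - base) at
    random and rescales the kept entries by 1/density, so the expected
    value of the delta is preserved despite the dropped entries.
    """
    delta = finetuned - base
    keep = torch.bernoulli(torch.full_like(delta, density))  # 1 = keep, 0 = drop
    return base + keep * delta / density
```

With `density: 0.53` as in the configuration below, roughly half of each model's delta weights survive. The `dare_ties` method used here additionally resolves sign conflicts between the surviving deltas (the TIES step) before summing them onto the base model with the configured weights.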
## 🧩 Configuration
```yaml
models:
  - model: mlabonne/NeuralMonarch-7B
    # No parameters necessary for base model
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: bardsai/jaskier-7b-dpo-v5.6
    parameters:
      density: 0.53
      weight: 0.3
  - model: macadeliccc/MBX-7B-v3-DPO
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: mlabonne/NeuralMonarch-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
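To reproduce the merge, save the configuration above as `config.yaml` and, with mergekit installed, run `mergekit-yaml config.yaml ./output-dir`.

## 💻 Usage

A minimal inference sketch using the 🤗 transformers `pipeline` API. The repository id below is a placeholder; substitute the id under which this merge is actually hosted on the Hub.

```python
import torch
from transformers import AutoTokenizer, pipeline

model_id = "your-username/pastiche-crown-clown-7B-dare"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "What is a model merge?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)
output = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(output[0]["generated_text"])
```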