---
license: apache-2.0
language:
- en
tags:
- merge
---
![image/png](https://i.ibb.co/VYkXDnn/icon.png)
An experimental merge that attempts to gain the roleplaying capabilities of Undi95/Toppy-M-7B and SanjiWatsuki/Loyal-Macaroni-Maid-7B while keeping the long context and general capabilities of the original mistralai/Mistral-7B-Instruct-v0.2.
The idea was that by combining a stack of two different models with one self-merge, each layer would become more unique, making the model "smarter" than a regular self-merge.
[Exl2, 6.0 bpw](https://huggingface.co/xxx777xxxASD/10.7B-Loyal-Mistral-Maid-32k-v0.2-A-exl2-bpw-6.0)
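
For reference, a minimal sketch of loading the merged model with Transformers; the repo id below is assumed from this card's name and may need adjusting:

```python
# Minimal loading sketch; the repo id is an assumption based on this card's name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "xxx777xxxASD/10.7B-Loyal-Mistral-Maid-32k-v0.2-A"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype below
    device_map="auto",
)

# The Mistral-Instruct chat template is inherited from the base model.
messages = [{"role": "user", "content": "Introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```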
### 10.7B Loyal Mistral Maid v0.2
```yaml
slices:
  - sources:
      - model: Mistral_Instruct_SelfMerge
        layer_range: [0, 48]
      - model: Loyal_Toppy_Maid
        layer_range: [0, 48]
merge_method: slerp
base_model: Mistral_Instruct_SelfMerge
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
### Loyal Toppy Maid
```yaml
slices:
  - sources:
      - model: Undi95/Toppy-M-7B
        layer_range: [0, 24]
  - sources:
      - model: SanjiWatsuki/Loyal-Macaroni-Maid-7B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
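
This stack and the Mistral_Instruct_SelfMerge below share the same 48-layer layout: the first slice contributes source layers 0–23 and the second contributes layers 8–31, so in the self-merge case layers 8–23 appear twice. A quick sketch of the layer mapping (my reading of the configs, not from the original card):

```python
# Layer mapping of a 48-layer passthrough stack built from 32-layer sources:
# the first slice contributes source layers 0-23, the second layers 8-31.
stack = [("slice_0", i) for i in range(0, 24)] + [("slice_1", i) for i in range(8, 32)]
assert len(stack) == 48

for out_layer, (slc, src) in enumerate(stack):
    # In the self-merge both slices come from the same model, so source
    # layers 8-23 appear twice in the 48-layer result.
    print(f"layer {out_layer:2d} <- {slc}, source layer {src:2d}")
```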
### Mistral_Instruct_SelfMerge
```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 24]
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
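
Since the top-level SLERP config references the two intermediate merges by name, those have to be built first. A sketch of chaining the three recipes, with file names as placeholders and the API usage assumed from mergekit's documented Python interface:

```python
# Sketch only: the .yml file names are placeholders for the three configs
# above, and the API calls are assumed from mergekit's documented usage.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

def merge(config_path: str, out_path: str) -> None:
    # Parse a YAML recipe into mergekit's config object and run the merge.
    with open(config_path, encoding="utf-8") as fp:
        config = MergeConfiguration.model_validate(yaml.safe_load(fp))
    run_merge(config, out_path=out_path, options=MergeOptions(copy_tokenizer=True))

# Build the two 48-layer stacks first, then SLERP them together.
merge("mistral_instruct_selfmerge.yml", "Mistral_Instruct_SelfMerge")
merge("loyal_toppy_maid.yml", "Loyal_Toppy_Maid")
merge("loyal_mistral_maid.yml", "10.7B-Loyal-Mistral-Maid-32k-v0.2-A")
```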