---
license: apache-2.0
language:
- en
tags:
- merge
---
![image/png](https://i.ibb.co/VYkXDnn/icon.png)
An experimental merge that attempts to gain the roleplaying capabilities of Undi95/Toppy-M-7B and SanjiWatsuki/Loyal-Macaroni-Maid-7B while retaining the context length and general capabilities of the original mistralai/Mistral-7B-Instruct-v0.2.
The idea was that by combining two different models with one self-merge, each layer of the resulting stack would be more unique, making the model “smarter” than a regular self-merge.
[Exl2, 6.0 bpw](https://huggingface.co/xxx777xxxASD/10.7B-Loyal-Mistral-Maid-32k-v0.2-A-exl2-bpw-6.0)
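For quick testing, the merged model should load like any other Transformers causal LM. A minimal sketch follows; the repository id is assumed from the exl2 link above (adjust it to the actual upload), and the prompt goes through the tokenizer's Mistral-Instruct chat template.

```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id, derived from the exl2 link above; adjust if needed.
model_id = "xxx777xxxASD/10.7B-Loyal-Mistral-Maid-32k-v0.2-A"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a Mistral-Instruct style prompt via the chat template
messages = [{"role": "user", "content": "Introduce yourself in character."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```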
### 10.7B Loyal Mistral Maid v0.2
```
slices:
  - sources:
      - model: Mistral_Instruct_SelfMerge
        layer_range: [0, 48]
      - model: Loyal_Toppy_Maid
        layer_range: [0, 48]
merge_method: slerp
base_model: Mistral_Instruct_SelfMerge
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
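The lists under `t` are gradients: mergekit interpolates the anchor values across the layer stack, so self-attention tensors lean toward one parent and MLP tensors toward the other at different depths. A rough illustrative sketch of that expansion (not mergekit's actual code):

```
import numpy as np

def expand_gradient(anchors, n_layers):
    # Spread the anchor points evenly over [0, 1] and linearly interpolate
    # a value for each of the n_layers layers.
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
    layer_pos = np.linspace(0.0, 1.0, num=n_layers)
    return np.interp(layer_pos, anchor_pos, anchors)

# Per-layer SLERP weight t for the self_attn tensors of the 48-layer stack
print(expand_gradient([0, 0.5, 0.3, 0.7, 1], 48))
```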
### Loyal Toppy Maid
```
slices:
  - sources:
      - model: Undi95/Toppy-M-7B
        layer_range: [0, 24]
  - sources:
      - model: SanjiWatsuki/Loyal-Macaroni-Maid-7B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
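Both passthrough merges (this one and the self-merge below) stack layers 0-23 of the first slice on top of layers 8-31 of the second, producing a 48-layer model. That is where the `[0, 48]` ranges in the final SLERP config come from, and what pushes the parameter count to roughly 10.7B. A quick sanity check of the layer arithmetic:

```
# layer_range entries are half-open [start, end), as in mergekit slice configs
slices = [(0, 24), (8, 32)]
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 48
```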
### Mistral_Instruct_SelfMerge
```
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [0, 24]
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.2
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
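To reproduce the merge, the three configs above would be run bottom-up: the two passthrough merges first, then the final SLERP over their outputs, either with the `mergekit-yaml` CLI or programmatically. A minimal sketch for one config, assuming mergekit's documented Python API (import paths and option names may differ between versions), with hypothetical file and output names:

```
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Hypothetical filename for the final SLERP config above
with open("loyal_mistral_maid_v0.2.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./10.7B-Loyal-Mistral-Maid-32k-v0.2",  # output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```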