---
base_model:
- IntervitensInc/Mistral-Nemo-Base-2407-chatml
- MarinaraSpaghetti/NemoMix-Unleashed-12B
- inflatebot/MN-12B-Mag-Mell-R1
- LatitudeGames/Wayfarer-12B
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
- TheDrummer/UnslopNemo-12B-v4
- yuyouyu/Mistral-Nemo-BD-RP
- rAIfle/Questionable-MN-bf16
- romaingrx/red-teamer-mistral-nemo
- crestf411/MN-Slush
- aixonlab/Zinakha-12b
- benhaotang/nemo-math-science-philosophy-12B
- ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
library_name: transformers
tags:
- mergekit
- merge
- 12b
- chat
- roleplay
- creative-writing
- model-stock
license: apache-2.0
---
# wuriaee-12B-schizostock
> [*That.* is crazy.](https://youtu.be/zEWJ-JgVS7Q)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This is a merge of 14 models that I found interesting. I've downloaded them all and plan to make proper merges with them later; before working out good combinations, I thought it'd be funny to merge them all at once via Model Stock. I did give some thought to the order of the models in the config: the more experimental models sit towards the top of the full config and the more stable ones towards the bottom.
I haven't tested it much yet, but the early results were intriguing.

Tenth model.

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [IntervitensInc/Mistral-Nemo-Base-2407-chatml](https://huggingface.co/IntervitensInc/Mistral-Nemo-Base-2407-chatml) as a base.
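For intuition, Model Stock chooses its interpolation ratio from the geometry of the fine-tuned checkpoints: it measures the angle between each fine-tune's weight delta from the base, then pulls the average of the fine-tunes back toward the base accordingly. A minimal NumPy sketch of the per-layer rule (a simplification that uses the mean pairwise cosine; `w0` and `ws` are illustrative names, not mergekit API):

```python
import numpy as np

def model_stock_layer(w0, ws):
    """Merge one layer's weights with the Model Stock rule.

    w0 : base-model weight tensor for this layer
    ws : list of k fine-tuned weight tensors for the same layer
    """
    k = len(ws)
    deltas = [w.ravel() - w0.ravel() for w in ws]
    # Mean pairwise cosine similarity between the fine-tunes' deltas.
    cos = np.mean([
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        for i, a in enumerate(deltas) for b in deltas[i + 1:]
    ])
    # Interpolation ratio t = k*cos / (1 + (k-1)*cos) from the paper.
    t = k * cos / (1 + (k - 1) * cos)
    w_avg = np.mean(ws, axis=0)
    # Interpolate between the fine-tune average and the base.
    return t * w_avg + (1 - t) * w0
```

The practical effect: the more the fine-tunes disagree (small cosine), the closer the merged layer stays to the base model.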

### Models Merged

The following models were included in the merge:
* [MarinaraSpaghetti/NemoMix-Unleashed-12B](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B)
* [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS](https://huggingface.co/DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS)
* [yuyouyu/Mistral-Nemo-BD-RP](https://huggingface.co/yuyouyu/Mistral-Nemo-BD-RP)
* [crestf411/MN-Slush](https://huggingface.co/crestf411/MN-Slush)
* [IntervitensInc/Mistral-Nemo-Base-2407-chatml](https://huggingface.co/IntervitensInc/Mistral-Nemo-Base-2407-chatml)
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b)
* [benhaotang/nemo-math-science-philosophy-12B](https://huggingface.co/benhaotang/nemo-math-science-philosophy-12B)
* [aixonlab/Zinakha-12b](https://huggingface.co/aixonlab/Zinakha-12b)
* [rAIfle/Questionable-MN-bf16](https://huggingface.co/rAIfle/Questionable-MN-bf16)
* [romaingrx/red-teamer-mistral-nemo](https://huggingface.co/romaingrx/red-teamer-mistral-nemo)

### Configuration

The following YAML configurations were used to produce this model:

```yaml
# Full Configuration
models:
# The next 4 models form p1; the base model, tokenizer, and chat template are the same for each part (and for the final merge)
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: LatitudeGames/Wayfarer-12B
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
  - model: TheDrummer/UnslopNemo-12B-v4
# p2:
  - model: yuyouyu/Mistral-Nemo-BD-RP
  - model: rAIfle/Questionable-MN-bf16
  - model: romaingrx/red-teamer-mistral-nemo
  - model: MarinaraSpaghetti/NemoMix-Unleashed-12B
# p3:
  - model: crestf411/MN-Slush
  - model: aixonlab/Zinakha-12b
  - model: benhaotang/nemo-math-science-philosophy-12B
  - model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
merge_method: model_stock
dtype: bfloat16
chat_template: "chatml"
tokenizer:
  source: union
  
```

```yaml
# Final Model:
models:
  - model: p1
  - model: p2
  - model: p3
base_model: IntervitensInc/Mistral-Nemo-Base-2407-chatml
merge_method: model_stock
dtype: bfloat16
chat_template: "chatml"
tokenizer:
  source: union

```
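Assuming the configs above are saved to files (the names `part1.yaml` and `final.yaml` here are illustrative), each stage would be run with mergekit's `mergekit-yaml` CLI, with the part outputs feeding the final `model_stock` merge:

```shell
# Merge each part first (p1 shown; repeat with the p2 and p3 configs),
# then merge the three part outputs into the final model.
mergekit-yaml part1.yaml ./p1
mergekit-yaml final.yaml ./wuriaee-12B-schizostock
```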