---
base_model:
- Khetterman/AbominationScience-12B-v4
- LatitudeGames/Wayfarer-12B
- mergekit-community/MN-Sappho-n2-12B
- Nitral-Archive/Diogenes-12B
- mergekit-community/MN-Ephemeros-12B
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
- jtatman/mistral_nemo_12b_reasoning_psychology_lora
- PygmalionAI/Eleusis-12B
- ToastyPigeon/Sto-vo-kor-12B
- mistralai/Mistral-Nemo-Base-2407
- mergekit-community/MN-Sappho-j-12B
- mistralai/Mistral-Nemo-Instruct-2407
- mergekit-community/MN-Sappho-g3-12B
- yamatazen/EtherealAurora-12B
- nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
- DavidAU/MN-Dark-Planet-TITAN-12B
- HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- mergekit-community/MN-Sappho-n-12B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
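To try the merged weights, a minimal inference sketch with `transformers` follows. The repository id below is a placeholder, since this card does not state where the merge is published; substitute the actual repo.

```python
# Minimal inference sketch using Hugging Face transformers.
# NOTE: "your-namespace/your-merge-12B" is a placeholder repo id,
# not the real location of this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/your-merge-12B"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",
)

prompt = "Tell me a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```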
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [yamatazen/EtherealAurora-12B](https://huggingface.co/yamatazen/EtherealAurora-12B) as the base.
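For intuition: Model Stock averages the fine-tuned checkpoints and interpolates the result back toward the base model, deriving the interpolation ratio from the angle between the fine-tuned weight deltas. The sketch below is an illustrative per-tensor reimplementation of that idea (the formula for `t` follows the paper as I read it), not mergekit's actual code.

```python
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Illustrative Model Stock rule for a single weight tensor.

    Sketch only: mergekit's real implementation differs in details.
    The ratio t = N*cos(theta) / ((N-1)*cos(theta) + 1) is taken from the
    Model Stock paper, where theta is the (assumed shared) angle between
    the fine-tuned deltas relative to the base weights.
    """
    n = len(finetuned)
    if n < 2:
        raise ValueError("Model Stock needs at least two fine-tuned models")
    deltas = [(ft - base).flatten() for ft in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of the deltas.
    cos_vals = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(min=0.0)
    t = n * cos_theta / ((n - 1) * cos_theta + 1)
    # Interpolate between the fine-tuned centroid and the base weights.
    centroid = torch.stack(finetuned).mean(dim=0)
    return t * centroid + (1 - t) * base
```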
### Models Merged
The following models were included in the merge:
* [Khetterman/AbominationScience-12B-v4](https://huggingface.co/Khetterman/AbominationScience-12B-v4)
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [mergekit-community/MN-Sappho-n2-12B](https://huggingface.co/mergekit-community/MN-Sappho-n2-12B)
* [Nitral-Archive/Diogenes-12B](https://huggingface.co/Nitral-Archive/Diogenes-12B)
* [mergekit-community/MN-Ephemeros-12B](https://huggingface.co/mergekit-community/MN-Ephemeros-12B)
* [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b) + [jtatman/mistral_nemo_12b_reasoning_psychology_lora](https://huggingface.co/jtatman/mistral_nemo_12b_reasoning_psychology_lora)
* [PygmalionAI/Eleusis-12B](https://huggingface.co/PygmalionAI/Eleusis-12B)
* [ToastyPigeon/Sto-vo-kor-12B](https://huggingface.co/ToastyPigeon/Sto-vo-kor-12B)
* [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)
* [mergekit-community/MN-Sappho-j-12B](https://huggingface.co/mergekit-community/MN-Sappho-j-12B) + [jtatman/mistral_nemo_12b_reasoning_psychology_lora](https://huggingface.co/jtatman/mistral_nemo_12b_reasoning_psychology_lora)
* [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)
* [mergekit-community/MN-Sappho-g3-12B](https://huggingface.co/mergekit-community/MN-Sappho-g3-12B)
* [nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B](https://huggingface.co/nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B)
* [DavidAU/MN-Dark-Planet-TITAN-12B](https://huggingface.co/DavidAU/MN-Dark-Planet-TITAN-12B)
* [HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407](https://huggingface.co/HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407)
* [mergekit-community/MN-Sappho-n-12B](https://huggingface.co/mergekit-community/MN-Sappho-n-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
out_dtype: bfloat16
merge_method: model_stock
base_model: yamatazen/EtherealAurora-12B
models:
  - model: DavidAU/MN-Dark-Planet-TITAN-12B
  - model: HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.7
  - model: Khetterman/AbominationScience-12B-v4
  - model: LatitudeGames/Wayfarer-12B
  - model: mergekit-community/MN-Sappho-g3-12B
  - model: mergekit-community/MN-Sappho-j-12B+jtatman/mistral_nemo_12b_reasoning_psychology_lora
    parameters:
      weight: 0.7
  - model: mergekit-community/MN-Sappho-n-12B
    parameters:
      weight: 0.5
  - model: mergekit-community/MN-Sappho-n2-12B
    parameters:
      weight: 0.8
  - model: mergekit-community/MN-Ephemeros-12B
    parameters:
      weight: 1.2
  - model: mistralai/Mistral-Nemo-Base-2407
    parameters:
      weight: 0.8
  - model: mistralai/Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.5
  - model: Nitral-Archive/Diogenes-12B
  - model: nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b+jtatman/mistral_nemo_12b_reasoning_psychology_lora
    parameters:
      weight: 0.8
  - model: PygmalionAI/Eleusis-12B
    parameters:
      weight: 0.8
  - model: ToastyPigeon/Sto-vo-kor-12B
    parameters:
      weight: 0.7
  - model: yamatazen/EtherealAurora-12B
    parameters:
      weight: 0.01
```
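Saving the configuration above as `config.yaml` should let you reproduce the merge with mergekit; the CLI equivalent is `mergekit-yaml config.yaml ./output-model-directory`. The sketch below follows mergekit's documented Python usage; the API names are taken from its README, so check your installed version in case the interface has changed.

```python
# Sketch of reproducing the merge via mergekit's Python entry point.
# Assumes the YAML above is saved as config.yaml in the working directory.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./output-model-directory",
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```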