### This is the config.yml for ABC_Books/test001 ###
models:
### Models that contribute a large 128K context window ###
  - model: CallComply/zephyr-7b-beta-128k
    parameters:
      weight: 0.1154
      density: 0.9
  - model: Nitral-Archive/HerculeanSea-7b-128k
    parameters:
      weight: 0.1154
      density: 0.9
  - model: NousResearch/Yarn-Mistral-7b-128k
    parameters:
      weight: 0.1154
      density: 0.9
### Models fine-tuned on occult knowledge ###
  - model: teknium/llama-deus-7b-v3-lora-merged
    parameters:
      weight: 0.0769
      density: 0.9
  - model: teknium/Hermes-Trismegistus-Mistral-7B
    parameters:
      weight: 0.0769
      density: 0.9
  - model: alexandrabenamar/Mistral-7B-Instruct-v0.2-Magic
    parameters:
      weight: 0.0769
      density: 0.9
  - model: tarotscientist/llama-2-7b-tarotreader
    parameters:
      weight: 0.0769
      density: 0.9
  - model: teknium/Mistral-Trismegistus-7B
    parameters:
      weight: 0.0769
      density: 0.9
### Talkative model with a large context window ###
  - model: Norquinal/Mistral-7B-storywriter
    parameters:
      weight: 0.0769
      density: 0.9
### Models fine-tuned to be uncensored and to use some crass diction ###
  - model: Undi95/BigL-7B
    parameters:
      weight: 0.0384
      density: 0.9
  - model: Undi95/LewdMistral-7B-0.2
    parameters:
      weight: 0.0385
      density: 0.9
  - model: Undi95/MistRP-Dolphin-7B
    parameters:
      weight: 0.0385
      density: 0.9
  - model: Undi95/Mistral-ClaudeLimaRP-v3-7B
    parameters:
      weight: 0.0385
      density: 0.9
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.0385
      density: 0.9
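### Sanity check on the weights above: 3 x 0.1154 + 6 x 0.0769 + 0.0384 + 4 x 0.0385 = 1.0000, ###
### so the relative weights sum to 1 even before normalization is applied. ###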
### DARE has been shown to "densify" standard models, lending a more robust merge when paired with high "density" values ###
merge_method: dare_linear
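### Roughly, dare_linear drops each delta parameter with probability 1 - density (here ~10% at density 0.9), ###
### rescales the survivors, then takes the weighted linear combination of the resulting task vectors. ###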
### This model is the closest match to all of the features needed in the final model ###
base_model: MrRobotoAI/Hathor-v4.1
parameters:
### When "densifying" models, the merged weight magnitudes tend to grow unless normalize is set ###
  normalize: true
dtype: float16
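### Example invocation with mergekit installed (output path is illustrative): ###
###   mergekit-yaml mergekit_config.yml ./test001a.1 --cuda ###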