---
base_model:
- nbeerbower/mistral-nemo-gutenberg-12B
- TheDrummer/UnslopNemo-12B-v3
- ReadyArt/Forgotten-Safeword-12B-V3.0
- Trappu/Magnum-Picaro-0.7-v2-12b
- ReadyArt/The-Omega-Directive-M-12B-v1.0
library_name: transformers
tags:
- mergekit
- merge
---

*Protocol Mascot*

# output-model

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [ReadyArt/The-Omega-Directive-M-12B-v1.0](https://huggingface.co/ReadyArt/The-Omega-Directive-M-12B-v1.0) as the base.

### Models Merged

The following models were included in the merge:

* [nbeerbower/mistral-nemo-gutenberg-12B](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B)
* [TheDrummer/UnslopNemo-12B-v3](https://huggingface.co/TheDrummer/UnslopNemo-12B-v3)
* [ReadyArt/Forgotten-Safeword-12B-V3.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-12B-V3.0)
* [Trappu/Magnum-Picaro-0.7-v2-12b](https://huggingface.co/Trappu/Magnum-Picaro-0.7-v2-12b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: ReadyArt/The-Omega-Directive-M-12B-v1.0
models:
  - model: ReadyArt/The-Omega-Directive-M-12B-v1.0
    parameters:
      weight: 0.2
  - model: ReadyArt/Forgotten-Safeword-12B-V3.0
    parameters:
      weight: 0.2
  - model: TheDrummer/UnslopNemo-12B-v3
    parameters:
      weight: 0.2
  - model: Trappu/Magnum-Picaro-0.7-v2-12b
    parameters:
      weight: 0.2
  - model: nbeerbower/mistral-nemo-gutenberg-12B
    parameters:
      weight: 0.2
parameters:
  density: 0.3
low_memory: true
tokenizer:
  source: union
chat_template: auto
```
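### Reproducing the Merge

As a rough sketch of how the configuration above can be re-run: mergekit exposes both a `mergekit-yaml` CLI and a Python API, and the snippet below follows the Python pattern shown in the mergekit README. The file name `config.yaml` and the output path `./output-model` are placeholders, not part of this card.

```python
# Sketch: re-running the merge via mergekit's Python API (pattern from the
# mergekit README). File names and paths here are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yaml" is assumed to contain the YAML from the Configuration section.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./output-model",           # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if one is present
        copy_tokenizer=True,             # write the (union) tokenizer with the weights
        low_cpu_memory=True,             # mirrors low_memory: true in the config
    ),
)
```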
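## Usage

Since the card declares `library_name: transformers`, the merged model should load with the standard transformers API. A minimal inference sketch; the model id below is a placeholder for wherever the merge output is stored, and bf16 is an assumption (typical for 12B Mistral-Nemo-based merges):

```python
# Sketch: loading and prompting the merged model with transformers.
# "output-model" is a placeholder repo id / local path, not a published repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "output-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 precision
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the config sets `chat_template: auto`, mergekit selects a chat template from the source models, so `apply_chat_template` should work out of the box; if no template was carried over, fall back to plain text prompts.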