---
base_model:
- Khetterman/AbominationScience-12B-v4
- LatitudeGames/Wayfarer-12B
- mergekit-community/MN-Sappho-n2-12B
- Nitral-Archive/Diogenes-12B
- mergekit-community/MN-Ephemeros-12B
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
- jtatman/mistral_nemo_12b_reasoning_psychology_lora
- PygmalionAI/Eleusis-12B
- ToastyPigeon/Sto-vo-kor-12B
- mistralai/Mistral-Nemo-Base-2407
- mergekit-community/MN-Sappho-j-12B
- jtatman/mistral_nemo_12b_reasoning_psychology_lora
- mistralai/Mistral-Nemo-Instruct-2407
- mergekit-community/MN-Sappho-g3-12B
- yamatazen/EtherealAurora-12B
- nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
- DavidAU/MN-Dark-Planet-TITAN-12B
- HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
- mergekit-community/MN-Sappho-n-12B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [yamatazen/EtherealAurora-12B](https://huggingface.co/yamatazen/EtherealAurora-12B) as the base.
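For intuition, here is a minimal per-tensor sketch of the Model Stock rule from the paper. It is an illustration only, not mergekit's actual implementation, and it ignores the per-model `weight` parameters that appear in the configuration below: the idea is to estimate the angle between the fine-tuned task vectors and use it to pull their average back toward the base weights.

```python
import torch

def model_stock_merge(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Model Stock (arXiv:2403.19522) applied to a single weight tensor.

    base  -- the base-model weight w_0 (here, EtherealAurora-12B)
    tuned -- the corresponding fine-tuned weights w_1..w_k (k >= 2)
    """
    k = len(tuned)
    # Task vectors d_i = w_i - w_0, flattened for cosine computations.
    deltas = [(w - base).flatten() for w in tuned]

    # Estimate cos(theta) as the mean pairwise cosine similarity of task vectors.
    cos_theta = torch.stack([
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]).mean()

    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)

    # Move the average of the fine-tuned weights toward the base by (1 - t).
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```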
### Models Merged

The following models were included in the merge:
* [Khetterman/AbominationScience-12B-v4](https://huggingface.co/Khetterman/AbominationScience-12B-v4)
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [mergekit-community/MN-Sappho-n2-12B](https://huggingface.co/mergekit-community/MN-Sappho-n2-12B)
* [Nitral-Archive/Diogenes-12B](https://huggingface.co/Nitral-Archive/Diogenes-12B)
* [mergekit-community/MN-Ephemeros-12B](https://huggingface.co/mergekit-community/MN-Ephemeros-12B)
* [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b) + [jtatman/mistral_nemo_12b_reasoning_psychology_lora](https://huggingface.co/jtatman/mistral_nemo_12b_reasoning_psychology_lora)
* [PygmalionAI/Eleusis-12B](https://huggingface.co/PygmalionAI/Eleusis-12B)
* [ToastyPigeon/Sto-vo-kor-12B](https://huggingface.co/ToastyPigeon/Sto-vo-kor-12B)
* [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)
* [mergekit-community/MN-Sappho-j-12B](https://huggingface.co/mergekit-community/MN-Sappho-j-12B) + [jtatman/mistral_nemo_12b_reasoning_psychology_lora](https://huggingface.co/jtatman/mistral_nemo_12b_reasoning_psychology_lora)
* [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)
* [mergekit-community/MN-Sappho-g3-12B](https://huggingface.co/mergekit-community/MN-Sappho-g3-12B)
* [nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B](https://huggingface.co/nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B)
* [DavidAU/MN-Dark-Planet-TITAN-12B](https://huggingface.co/DavidAU/MN-Dark-Planet-TITAN-12B)
* [HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407](https://huggingface.co/HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407)
* [mergekit-community/MN-Sappho-n-12B](https://huggingface.co/mergekit-community/MN-Sappho-n-12B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
out_dtype: bfloat16
merge_method: model_stock
base_model: yamatazen/EtherealAurora-12B
models:
  - model: DavidAU/MN-Dark-Planet-TITAN-12B
  - model: HumanLLMs/Human-Like-Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.7
  - model: Khetterman/AbominationScience-12B-v4
  - model: LatitudeGames/Wayfarer-12B
  - model: mergekit-community/MN-Sappho-g3-12B
  - model: mergekit-community/MN-Sappho-j-12B+jtatman/mistral_nemo_12b_reasoning_psychology_lora
    parameters:
      weight: 0.7
  - model: mergekit-community/MN-Sappho-n-12B
    parameters:
      weight: 0.5
  - model: mergekit-community/MN-Sappho-n2-12B
    parameters:
      weight: 0.8
  - model: mergekit-community/MN-Ephemeros-12B
    parameters:
      weight: 1.2
  - model: mistralai/Mistral-Nemo-Base-2407
    parameters:
      weight: 0.8
  - model: mistralai/Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.5
  - model: Nitral-Archive/Diogenes-12B
  - model: nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b+jtatman/mistral_nemo_12b_reasoning_psychology_lora
    parameters:
      weight: 0.8
  - model: PygmalionAI/Eleusis-12B
    parameters:
      weight: 0.8
  - model: ToastyPigeon/Sto-vo-kor-12B
    parameters:
      weight: 0.7
  - model: yamatazen/EtherealAurora-12B
    parameters:
      weight: 0.01
```
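To reproduce the merge, the configuration above can be fed to mergekit. A minimal sketch using mergekit's Python API (the exact `MergeOptions` fields may vary between mergekit versions; `config.yml` and `./merged` are hypothetical paths):

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "config.yml"  # the YAML configuration shown above
OUTPUT_PATH = "./merged"   # hypothetical output directory

# Parse the YAML into mergekit's configuration schema.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; models not already in the local HF cache are downloaded.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```

The result loads like any other Mistral-Nemo-based checkpoint. A minimal usage sketch, assuming the tokenizer (and its chat template) was copied over during the merge:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged"  # hypothetical local path or Hub repo id for this merge
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```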