---
base_model:
- nlpguy/StableProse
- TheDrummer/Rocinante-12B-v1
- Sao10K/MN-12B-Lyra-v2a1
- unsloth/Mistral-Nemo-Base-2407
- nbeerbower/mistral-nemo-gutenberg-12B-v3
- unsloth/Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- mergekit
- merge
---

# Carasique-v0.1

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with [unsloth/Mistral-Nemo-Base-2407](https://huggingface.co/unsloth/Mistral-Nemo-Base-2407) as the base. della_linear prunes each model's delta from the base (the fraction of delta parameters kept is set by `density`, with `epsilon` controlling the spread of drop probabilities) and linearly combines the surviving deltas, weighted per model by `weight` and scaled overall by `lambda`.

### Models Merged

The following models were included in the merge:

* [nlpguy/StableProse](https://huggingface.co/nlpguy/StableProse)
* [TheDrummer/Rocinante-12B-v1](https://huggingface.co/TheDrummer/Rocinante-12B-v1)
* [Sao10K/MN-12B-Lyra-v2a1](https://huggingface.co/Sao10K/MN-12B-Lyra-v2a1)
* [nbeerbower/mistral-nemo-gutenberg-12B-v3](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v3)
* [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: unsloth/Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.1
      density: 0.4
  - model: nlpguy/StableProse
    parameters:
      weight: 0.12
      density: 0.5
  - model: Sao10K/MN-12B-Lyra-v2a1
    parameters:
      weight: 0.2
      density: 0.6
  - model: TheDrummer/Rocinante-12B-v1
    parameters:
      weight: 0.25
      density: 0.7
  - model: nbeerbower/mistral-nemo-gutenberg-12B-v3
    parameters:
      weight: 0.33
      density: 0.8
merge_method: della_linear
base_model: unsloth/Mistral-Nemo-Base-2407
parameters:
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
```
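
### Reproducing the merge

The configuration above can be fed straight to mergekit. Below is a minimal sketch using mergekit's Python API (`MergeConfiguration` and `run_merge`, following the pattern documented in the mergekit README); the `config.yaml` filename and the output directory are placeholders, not part of this card.

```python
# Minimal sketch: re-running this merge with mergekit's Python API.
# Assumes the YAML above is saved as config.yaml; paths are placeholders.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Carasique-v0.1",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```

The `mergekit-yaml config.yaml ./Carasique-v0.1` command-line entry point is the equivalent one-liner.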
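
### Usage

Since the card declares `library_name: transformers`, the merged model should load like any other Mistral-Nemo checkpoint. A minimal sketch; the repo id below is a placeholder for wherever this merge is published.

```python
# Minimal sketch: loading and sampling from the merged model.
# "your-username/Carasique-v0.1" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Carasique-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge itself was produced in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```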