---
base_model:
- jdqwoi/TooManyMixRolePlay-7B-Story_V3.5
- MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp
- MrRobotoAI/Test001a.2
- ajibawa-2023/General-Stories-Mistral-7B
- KnutJaegersberg/Mistral-7B-EssayWriter
- luozhuanggary/GOAT-v0.2-Mistral-7B-Claude
- tdh87/StoryTeller7b-meh
- ajibawa-2023/Young-Children-Storyteller-Mistral-7B
- scribis/Fantastica-7b-Instruct-0.2-Italian_merged
- kasper52786/StoryWeaver-7b-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with [MrRobotoAI/Test001a.2](https://huggingface.co/MrRobotoAI/Test001a.2) as the base.

### Models Merged

The following models were included in the merge:

* [jdqwoi/TooManyMixRolePlay-7B-Story_V3.5](https://huggingface.co/jdqwoi/TooManyMixRolePlay-7B-Story_V3.5)
* [MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp)
* [ajibawa-2023/General-Stories-Mistral-7B](https://huggingface.co/ajibawa-2023/General-Stories-Mistral-7B)
* [KnutJaegersberg/Mistral-7B-EssayWriter](https://huggingface.co/KnutJaegersberg/Mistral-7B-EssayWriter)
* [luozhuanggary/GOAT-v0.2-Mistral-7B-Claude](https://huggingface.co/luozhuanggary/GOAT-v0.2-Mistral-7B-Claude)
* [tdh87/StoryTeller7b-meh](https://huggingface.co/tdh87/StoryTeller7b-meh)
* [ajibawa-2023/Young-Children-Storyteller-Mistral-7B](https://huggingface.co/ajibawa-2023/Young-Children-Storyteller-Mistral-7B)
* [scribis/Fantastica-7b-Instruct-0.2-Italian_merged](https://huggingface.co/scribis/Fantastica-7b-Instruct-0.2-Italian_merged)
* [kasper52786/StoryWeaver-7b-Instruct-v0.1](https://huggingface.co/kasper52786/StoryWeaver-7b-Instruct-v0.1)

### Configuration

The following YAML
configuration was used to produce this model:

```yaml
### This is the config.yml for ABC_Books/test002 ###
models:
  - model: KnutJaegersberg/Mistral-7B-EssayWriter
    parameters:
      weight: 0.1
      density: 0.9
  - model: MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp
    parameters:
      weight: 0.1
      density: 0.9
  - model: ajibawa-2023/General-Stories-Mistral-7B
    parameters:
      weight: 0.1
      density: 0.9
  - model: ajibawa-2023/Young-Children-Storyteller-Mistral-7B
    parameters:
      weight: 0.1
      density: 0.9
  - model: jdqwoi/TooManyMixRolePlay-7B-Story_V3.5
    parameters:
      weight: 0.1
      density: 0.9
  - model: kasper52786/StoryWeaver-7b-Instruct-v0.1
    parameters:
      weight: 0.1
      density: 0.9
  - model: luozhuanggary/GOAT-v0.2-Mistral-7B-Claude
    parameters:
      weight: 0.1
      density: 0.9
  - model: scribis/Fantastica-7b-Instruct-0.2-Italian_merged
    parameters:
      weight: 0.1
      density: 0.9
  - model: tdh87/StoryTeller7b-meh
    parameters:
      weight: 0.1
      density: 0.9
  - model: MrRobotoAI/Test001a.2
    parameters:
      weight: 0.1
      density: 0.9
merge_method: dare_linear
### Of the candidates so far, this model is the closest match to the features needed in the final model, so it becomes the base model for subsequent merges. ###
base_model: MrRobotoAI/Test001a.2
parameters:
  normalize: true
dtype: float16
```
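For intuition about what the parameters above do: with `dare_linear` and `density: 0.9`, each model's delta from the base (its parameters minus the base model's) has roughly 10% of its entries randomly dropped, with the survivors rescaled by `1/density` so the pruned delta matches the original in expectation; the pruned deltas are then combined linearly by `weight`, and `normalize: true` rescales the ten `weight: 0.1` values to sum to 1. A toy sketch of that arithmetic, not mergekit's actual implementation, using a made-up four-element `delta` in place of real 7B-parameter tensors:

```python
import random

random.seed(0)

density = 0.9         # fraction of delta entries kept, as in the config
weights = [0.1] * 10  # ten models, uniform weight 0.1

# DARE step on a toy task vector (model parameters minus base parameters):
# drop each entry with probability 1 - density and rescale survivors by
# 1/density, so the expected value of the pruned delta equals the original.
delta = [0.5, -0.2, 0.3, 0.1]
pruned = [d / density if random.random() < density else 0.0 for d in delta]

# normalize: true divides each raw weight by the total, so the effective
# mixing weights sum to exactly 1 regardless of floating-point drift.
total = sum(weights)
normalized = [w / total for w in weights]
```

Because all ten weights are equal, normalization leaves the merge uniform; the base model's own delta is zero, so its entry effectively anchors the merge at `MrRobotoAI/Test001a.2`.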