---
base_model:
- allura-org/L3.1-8b-RP-Ink
- DreadPoor/Aspire-8B-model_stock
- NousResearch/Hermes-3-Llama-3.1-8B
- mlabonne/NeuralDaredevil-8B-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge-new-2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) as the base model.

### Models Merged

The following models were included in the merge:
* [allura-org/L3.1-8b-RP-Ink](https://huggingface.co/allura-org/L3.1-8b-RP-Ink)
* [DreadPoor/Aspire-8B-model_stock](https://huggingface.co/DreadPoor/Aspire-8B-model_stock)
* [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# SCE (Select, Calculate, Erase) merge configuration
merge_method: sce
base_model: NousResearch/Hermes-3-Llama-3.1-8B
models:
  - model: allura-org/L3.1-8b-RP-Ink
    parameters:
      weight: 1.0
  - model: DreadPoor/Aspire-8B-model_stock
    parameters:
      weight: 1.0
  #- model: TroyDoesAI/BlackSheep-X-Dolphin
  #  parameters:
  #    weight: 1.0
  - model: mlabonne/NeuralDaredevil-8B-abliterated
    parameters:
      weight: 1.0
  #- model: SicariusSicariiStuff/Wingless_Imp_8B
  #  parameters:
  #    weight: 1.0
  #- model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  #  parameters:
  #    weight: 1.0
parameters:
  select_topk: 0.4
  density: 0.7
  lambda: 1.0
tokenizer:
  source: "union"
dtype: float16
chat_template: "chatml"
```
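
## Usage

To reproduce the merge, save the configuration above as `config.yaml` and run it through mergekit's CLI (e.g. `mergekit-yaml config.yaml ./merge-new-2`).

The result loads like any other Llama 3.1 checkpoint. Below is a minimal inference sketch, assuming the merged weights are published under the placeholder repo id `your-username/merge-new-2` and that `transformers` and PyTorch are installed:

```python
# Minimal inference sketch. "your-username/merge-new-2" is a placeholder;
# substitute the actual Hugging Face repo id of the merged model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merge-new-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge config's dtype: float16
    device_map="auto",
)

# The config sets chat_template: "chatml", so serialize prompts via the chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the tokenizer was built with `source: "union"`, the merged vocabulary covers all constituent models; the ChatML template governs how conversations are formatted at inference time.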