---
base_model:
- allura-org/L3.1-8b-RP-Ink
- DreadPoor/Aspire-8B-model_stock
- NousResearch/Hermes-3-Llama-3.1-8B
- mlabonne/NeuralDaredevil-8B-abliterated
library_name: transformers
tags:
- mergekit
- merge
---

# merge-new-2
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SCE merge method, with NousResearch/Hermes-3-Llama-3.1-8B as the base.
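SCE works on task vectors (each source model's weights minus the base model's weights) and, in its "Select" step, keeps only the fraction of parameter positions that vary most across the source models before fusing them. The toy NumPy sketch below is only an illustration of that idea, not mergekit's implementation:

```python
import numpy as np

def sce_select(task_vectors: np.ndarray, topk: float) -> np.ndarray:
    """Toy sketch of SCE's 'Select' step: keep only the fraction
    `topk` of parameter positions whose values vary most across
    the stacked task vectors, zeroing the rest."""
    variance = task_vectors.var(axis=0)  # variance per parameter position
    n_keep = max(1, int(round(topk * variance.size)))
    # Threshold at the n_keep-th largest variance.
    threshold = np.partition(variance, -n_keep)[-n_keep]
    mask = variance >= threshold
    return task_vectors * mask

# Task vectors for two hypothetical source models (toy numbers).
tvs = np.array([[0.5, 0.1, -0.3, 0.02, 0.0],
                [-0.4, 0.1, 0.3, 0.01, 0.0]])
selected = sce_select(tvs, 0.4)  # keep 40% of positions, as select_topk: 0.4 does
```

Positions where the source models agree (or barely differ from the base) carry little model-specific signal, so zeroing them reduces interference when the surviving deltas are combined.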
### Models Merged

The following models were included in the merge:
* allura-org/L3.1-8b-RP-Ink
* DreadPoor/Aspire-8B-model_stock
* mlabonne/NeuralDaredevil-8B-abliterated
### Configuration

The following YAML configuration was used to produce this model:
```yaml
# SCE (Select, Calculate, Erase) merge configuration
merge_method: sce
base_model: NousResearch/Hermes-3-Llama-3.1-8B
models:
  - model: allura-org/L3.1-8b-RP-Ink
    parameters:
      weight: 1.0
  - model: DreadPoor/Aspire-8B-model_stock
    parameters:
      weight: 1.0
  #- model: TroyDoesAI/BlackSheep-X-Dolphin
  #  parameters:
  #    weight: 1.0
  - model: mlabonne/NeuralDaredevil-8B-abliterated
    parameters:
      weight: 1.0
  #- model: SicariusSicariiStuff/Wingless_Imp_8B
  #  parameters:
  #    weight: 1.0
  #- model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  #  parameters:
  #    weight: 1.0
parameters:
  select_topk: 0.4
  density: 0.7
  lambda: 1.0
tokenizer:
  source: "union"
dtype: float16
chat_template: "chatml"
```
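Since the configuration sets `chat_template: "chatml"`, prompts for the merged model should follow the ChatML turn format (as used by the Hermes-3 base). A minimal sketch of that formatting — the `to_chatml` helper name is just for illustration:

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a conversation in ChatML turn format:
    <|im_start|>{role}\n{content}<|im_end|> for each message."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
             for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
```

In practice, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` produces this formatting automatically once the merged model's tokenizer is loaded.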