# NeoSage-12B
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Task Arithmetic merge method, with `../EsotericSage-12B-frank` as the base.
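Task arithmetic merges by computing a "task vector" for each fine-tuned model (its parameters minus the base's) and adding a weighted sum of those vectors back onto the base. A minimal sketch of the idea, using toy tensors rather than real checkpoints (the function name and dict-of-arrays layout are illustrative, not mergekit's actual API):

```python
import numpy as np

def task_arithmetic_merge(base, models, weights, normalize=True):
    """Merge by adding weighted task vectors (model - base) onto the base.

    base:    dict of parameter name -> array (the base checkpoint)
    models:  list of dicts with the same keys (fine-tuned checkpoints)
    weights: one scalar weight per model
    """
    if normalize:
        # Matches the config's `normalize: true`: scale weights to sum to 1.
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (m[name] - base_param) for w, m in zip(weights, models))
        merged[name] = base_param + delta
    return merged

# Toy example with a single two-element "parameter".
base = {"w": np.array([1.0, 2.0])}
tuned = {"w": np.array([3.0, 2.0])}   # task vector is [2.0, 0.0]
merged = task_arithmetic_merge(base, [tuned], [0.5], normalize=False)
print(merged["w"])  # [2. 2.]
```

With `normalize: true`, as in the configuration below, the per-model weights are rescaled so they sum to one before the task vectors are combined.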
### Models Merged
The following models were included in the merge:
- ../Neona-12B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ../EsotericSage-12B-frank # good smarts & prose
    parameters:
      weight: 1.0
  - model: ../Neona-12B # decent coding & uncensor & roleplay
    parameters:
      weight: [0.25, 0.30, 0.75, 0.30, 0.25]
merge_method: task_arithmetic
base_model: ../EsotericSage-12B-frank # good smarts & prose
normalize: true
dtype: float16
chat_template: "chatml"
tokenizer:
  source: "base"
```