---
base_model:
  - v000000/NM-12B-Lyris-dev-3
  - ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
library_name: transformers
tags:
  - mergekit
  - merge
---

# insanity

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with [ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) as the base.
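For intuition, here is a minimal, hypothetical sketch of what a della_linear-style merge does to a single weight tensor: each fine-tune's delta against the base is stochastically pruned according to its density, rescaled, linearly combined using its weight, scaled by lambda, and added back to the base. This is a simplification of mergekit's actual implementation (real DELLA derives per-parameter drop probabilities from parameter magnitudes, with epsilon controlling their spread); all names here are illustrative:

```python
import torch

def della_linear_sketch(base, finetunes, densities, weights, lam=1.05):
    """Simplified della_linear-style merge of one tensor (illustrative, not mergekit's code)."""
    merged_delta = torch.zeros_like(base)
    for tensor, density, weight in zip(finetunes, densities, weights):
        delta = tensor - base                        # task vector relative to the base model
        mask = torch.rand_like(delta) < density      # keep roughly `density` of the entries
        pruned = torch.where(mask, delta / density,  # rescale survivors to preserve expectation
                             torch.zeros_like(delta))
        merged_delta += weight * pruned              # weighted linear combination of deltas
    return base + lam * merged_delta                 # lambda rescales the merged delta

# Toy usage on random tensors.
base = torch.randn(4, 4)
m1 = base + 0.1 * torch.randn(4, 4)
m2 = base + 0.1 * torch.randn(4, 4)
merged = della_linear_sketch(base, [m1, m2], densities=[0.5, 0.6], weights=[0.3, 0.33])
```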

### Models Merged

The following models were included in the merge:

* [v000000/NM-12B-Lyris-dev-3](https://huggingface.co/v000000/NM-12B-Lyris-dev-3)
* Insanity/chatml
* Insanity/uncen
* Insanity/conv

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
dtype: bfloat16
merge_method: della_linear
parameters:
  epsilon: 0.04
  int8_mask: 1.0
  lambda: 1.05
  normalize: 0.0
  rescale: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1
    parameters:
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
  - layer_range: [0, 40]
    model: Insanity/chatml
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
  - layer_range: [0, 40]
    model: Insanity/uncen
    parameters:
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
  - layer_range: [0, 40]
    model: Insanity/conv
    parameters:
      density: [0.7]
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
  - layer_range: [0, 40]
    model: v000000/NM-12B-Lyris-dev-3
    parameters:
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
      weight: [0.33]
tokenizer_source: base
```
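
Assuming mergekit is installed, the merge should be reproducible by saving the configuration above as `config.yaml` and running `mergekit-yaml config.yaml ./output-model`. The result loads as a regular transformers checkpoint; in the sketch below, the repo id is an assumption and stands in for wherever the merged weights are hosted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Nohobby/Insanity"  # assumption: replace with the actual repo id or local merge output path
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"  # bfloat16 matches the merge dtype
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```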