final_model

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the task arithmetic merge method, with CultriX/SeQwence-14B as the base model.
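
In task arithmetic, each fine-tuned model contributes a "task vector" (its parameters minus the base model's), and these vectors are added back onto the base with per-model weights; in this merge the weights also vary per layer slice, as the configuration below shows. The following is a simplified Python sketch of that idea, not mergekit's actual implementation; the function and variable names are illustrative only.

# Simplified sketch of task arithmetic merging (Ilharco et al., 2022).
# Names are illustrative; this is not mergekit's API.
import torch

def task_arithmetic(base: dict, donors: list[tuple[dict, float]],
                    normalize: bool = True) -> dict:
    """Merge donor checkpoints into a base by adding weighted task vectors.

    base   -- parameter name -> tensor for the base model
    donors -- list of (state_dict, weight) pairs for the fine-tuned models
    """
    total = sum(w for _, w in donors)
    scale = total if normalize and total != 0 else 1.0

    merged = {}
    for name, base_param in base.items():
        # Task vector = donor parameters minus the base parameters.
        delta = sum(w * (donor[name] - base_param) for donor, w in donors)
        merged[name] = base_param + delta / scale
    return merged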

Models Merged

The following models were included in the merge:

- CultriX/Qwen2.5-14B-MegaMerge-pt2
- CultriX/Qwen2.5-14B-Wernicke
- CultriX/SeQwence-14Bv1
- CultriX/Qwestion-14B

Configuration

The following YAML configuration was used to produce this model:

base_model: CultriX/SeQwence-14B
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 8]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.6973896126881656
  - layer_range: [0, 8]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.25536014932096784
  - layer_range: [0, 8]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.024099354110818955
  - layer_range: [0, 8]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.062255273414504236
  - layer_range: [0, 8]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.19842743525221093
- sources:
  - layer_range: [8, 16]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.16541251205918317
  - layer_range: [8, 16]
    model: CultriX/SeQwence-14B
    parameters:
      weight: -0.11758222851964711
  - layer_range: [8, 16]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.026110542928974606
  - layer_range: [8, 16]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.17351317150552764
  - layer_range: [8, 16]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.2189587409844403
- sources:
  - layer_range: [16, 24]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: -0.18585407879293625
  - layer_range: [16, 24]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.28979432739572986
  - layer_range: [16, 24]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.13321246350564858
  - layer_range: [16, 24]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: -0.07525163437282778
  - layer_range: [16, 24]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.09939146833918691
- sources:
  - layer_range: [24, 32]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.20535780306129478
  - layer_range: [24, 32]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.23689447247624298
  - layer_range: [24, 32]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.08595523000213551
  - layer_range: [24, 32]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.32843658569448686
  - layer_range: [24, 32]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.5660243716148874
- sources:
  - layer_range: [32, 40]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.4782495451944288
  - layer_range: [32, 40]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.04636896831126347
  - layer_range: [32, 40]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: -0.20847472991447114
  - layer_range: [32, 40]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: -0.13710751148654265
  - layer_range: [32, 40]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.04879517930226218
- sources:
  - layer_range: [40, 48]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.24947640644399857
  - layer_range: [40, 48]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.27995726695330514
  - layer_range: [40, 48]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.29376471224311385
  - layer_range: [40, 48]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.11668812856147562
  - layer_range: [40, 48]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.117720095241547
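
To reproduce the merge, the configuration above can be saved to a YAML file and passed to mergekit's mergekit-yaml command. Once merged (or downloaded from the Hub), the model can be loaded with transformers in bfloat16, matching the dtype above. The repository id below is an assumption based on this card; adjust it to wherever the weights actually live.

# Minimal loading sketch, assuming the merged weights are published under
# the repository id "CultriX/SeQwence-14Bv2" (adjust if different).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CultriX/SeQwence-14Bv2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype: bfloat16
    device_map="auto",           # requires accelerate; remove to load on CPU
)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))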

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                 Value
Avg.                   34.41
IFEval (0-Shot)        57.86
BBH (3-Shot)           46.53
MATH Lvl 5 (4-Shot)    21.60
GPQA (0-shot)          14.77
MuSR (0-shot)          17.55
MMLU-PRO (5-shot)      48.16