# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with Qwen/Qwen2.5-14B as the base.
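As a quick sketch of what the `density` values in the configuration below control: DARE randomly drops a fraction of each fine-tuned model's parameter deltas from the base and rescales the survivors, before TIES-style sign election combines the weighted deltas. For a delta vector $\delta$ and density $p$ (the fraction of deltas kept),

$$\tilde{\delta} = \frac{m \odot \delta}{p}, \qquad m_i \sim \mathrm{Bernoulli}(p),$$

so `density: 0.6` keeps roughly 60% of each model's deltas and rescales those kept by $1/0.6$.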

### Models Merged

The following models were included in the merge:

* CultriX/Qwen2.5-14B-Wernicke
* VAGOsolutions/SauerkrautLM-v2-14b-DPO
* CultriX/Qwen2.5-14B-MegaMerge-pt2
* CultriX/SeQwence-14B
* v000000/Qwen2.5-Lumen-14B

### Configuration

The following YAML configuration was used to produce this model:


```yaml
models:
  - model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.35      # Strong performance in GPQA, MUSR, and MMLU-PRO
      density: 0.6      # Retain 60% of significant parameters
  - model: VAGOsolutions/SauerkrautLM-v2-14b-DPO
    parameters:
      weight: 0.30      # Exceptional IFEval and MATH Level 5 capabilities
      density: 0.6      # Retain 60% of significant parameters
  - model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.20      # Balanced contributions to TruthfulQA and MMLU
      density: 0.5      # Retain 50% of significant parameters
  - model: CultriX/SeQwence-14B
    parameters:
      weight: 0.15      # Provides diverse data and generalization
      density: 0.4      # Retain 40% of significant parameters
  - model: v000000/Qwen2.5-Lumen-14B
    parameters:
      weight: 0.10      # Enhances creative and narrative tasks
      density: 0.5      # Retain 50% for task diversity
base_model: Qwen/Qwen2.5-14B
merge_method: dare_ties
parameters:
  normalize: true       # Ensures parameter scaling compatibility
  int8_mask: true       # Optimizes memory and computational efficiency
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-14B-Instruct
```
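
To reproduce this merge, the configuration above can be saved as `config.yaml` and handed to mergekit. A minimal sketch using mergekit's Python API; the output path and options here are illustrative:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge recipe shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE TIES merge and write the result to ./SeQwence-14Bv1.
run_merge(
    merge_config,
    out_path="./SeQwence-14Bv1",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # pull in the tokenizer_source tokenizer
    ),
)
```

The `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./SeQwence-14Bv1 --cuda`) is the equivalent one-liner.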

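The merged checkpoint loads like any other Qwen2.5 model. A minimal inference sketch with transformers; the prompt and generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CultriX/SeQwence-14Bv1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

# The tokenizer comes from Qwen2.5-14B-Instruct, so its chat template applies.
messages = [{"role": "user", "content": "Summarize DARE TIES merging in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```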