# merge

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with mistralai/Mistral-Small-24B-Base-2501 as the base model.
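
DARE TIES works on the per-parameter differences ("deltas") between each fine-tuned model and the base: DARE randomly drops a fraction of each delta and rescales the survivors, then TIES resolves sign conflicts by a per-element majority vote before summing. The sketch below illustrates the idea on a single tensor in PyTorch; it is a simplified illustration under stated assumptions, not mergekit's actual implementation, and the function name `dare_ties` is invented here.

```python
import torch

def dare_ties(base, tuned, densities, weights, normalize=True):
    """Merge one parameter tensor from several fine-tuned models into the base tensor."""
    deltas = []
    for ft, density in zip(tuned, densities):
        delta = ft - base                                   # task vector relative to the base
        keep = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(keep * delta / density)               # DARE: drop and rescale
    deltas = torch.stack(deltas)
    w = torch.tensor(weights, dtype=base.dtype).view(-1, *([1] * base.dim()))
    weighted = w * deltas
    elected = torch.sign(weighted.sum(dim=0))               # TIES: per-element majority sign
    agree = (torch.sign(deltas) == elected).to(base.dtype)  # keep only agreeing contributions
    merged = (weighted * agree).sum(dim=0)
    if normalize:                                           # mirrors `normalize: true` in the config
        merged = merged / (w * agree).sum(dim=0).clamp(min=1e-8)
    return base + merged

# Toy usage with the densities/weights from the configuration below:
base = torch.zeros(8)
tuned = [base + torch.randn(8) for _ in range(4)]
merged = dare_ties(base, tuned,
                   densities=[0.5, 0.9, 0.7, 0.6],
                   weights=[0.1, 0.4, 0.3, 0.2])
```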

### Models Merged

The following models were included in the merge:

* huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
* TheDrummer/Cydonia-24B-v2.1
* PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
* ReadyArt/Forgotten-Safeword-24B-3.6

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mistralai/Mistral-Small-24B-Base-2501
    # No parameters necessary for the base model
  - model: huihui-ai/Mistral-Small-24B-Instruct-2501-abliterated
    parameters:
      density: 0.5  # Retain 50% of this model's delta parameters
      weight: 0.1   # Lowest influence
  - model: TheDrummer/Cydonia-24B-v2.1
    parameters:
      density: 0.9  # Retain 90% of this model's delta parameters
      weight: 0.4   # Highest influence
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
    parameters:
      density: 0.7  # Retain 70% of this model's delta parameters
      weight: 0.3   # Second-highest influence
  - model: ReadyArt/Forgotten-Safeword-24B-3.6
    parameters:
      density: 0.6  # Retain 60% of this model's delta parameters
      weight: 0.2   # Moderate influence

merge_method: dare_ties
base_model: mistralai/Mistral-Small-24B-Base-2501
parameters:
  normalize: true  # Normalize merged parameter scales for consistency
  int8_mask: true  # Store intermediate masks as int8 to save memory
dtype: bfloat16  # Compute and save the merged weights in bfloat16
```
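
A configuration like the one above is run with mergekit's `mergekit-yaml` command (e.g. `mergekit-yaml config.yml ./output`). The resulting merge is published as mergekit-community/mergekit-dare_ties-lmociuf and can be loaded with the standard transformers API. The snippet below is an illustrative sketch: it assumes `transformers`, `torch`, and `accelerate` are installed and that enough GPU memory is available for a 24B-parameter model in bfloat16.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/mergekit-dare_ties-lmociuf"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the `dtype` in the merge config
    device_map="auto",
)

prompt = "Write a short haiku about model merging."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```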