# Trifecta-L3-8b

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) as the base.
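
For intuition, the sketch below shows what DARE TIES does per tensor with the `density` and `weight` values from the configuration further down: each fine-tuned model's delta from the base is randomly dropped and rescaled (DARE), then a sign is elected per parameter and only agreeing deltas are kept (TIES). This is a toy illustration under simplified assumptions, not mergekit's actual implementation; `dare_ties` is a hypothetical helper.

```python
import torch

def dare_ties(base, finetuned, densities, weights, seed=0):
    """Toy per-tensor DARE TIES merge (illustration only, not mergekit's code)."""
    torch.manual_seed(seed)
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                                   # task vector vs. base model
        keep = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(weight * delta * keep / density)      # DARE: drop, then rescale
    stacked = torch.stack(deltas)
    elected = torch.sign(stacked.sum(dim=0))                # TIES: elect a sign per parameter
    agree = (torch.sign(stacked) == elected).to(stacked.dtype)
    # normalize: 0.0 in the config below means the weighted deltas are summed as-is
    return base + (stacked * agree).sum(dim=0)
```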

### Models Merged

The following models were included in the merge:

* [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: Sao10K/L3-8B-Stheno-v3.2
chat_template: llama3
merge_method: dare_ties
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 32]
        model: NousResearch/Hermes-3-Llama-3.1-8B
        parameters:
          density: 0.5
          weight: 0.3
      - layer_range: [0, 32]
        model: Sao10K/L3-8B-Stheno-v3.2
        parameters:
          density: 0.5
          weight: 0.4
      - layer_range: [0, 32]
        model: Sao10K/L3-8B-Lunaris-v1
        parameters:
          density: 0.5
          weight: 0.3
out_dtype: bfloat16
parameters:
  normalize: 0.0
tokenizer:
  source: base
```
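
With this configuration saved locally (e.g. as `trifecta.yaml`; the filename is arbitrary), the merge should be reproducible with mergekit's `mergekit-yaml` CLI, e.g. `mergekit-yaml trifecta.yaml ./Trifecta-L3-8b` (add `--cuda` to run the merge on GPU).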
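
As a usage sketch (assumptions: the repo id `Entropicengine/Trifecta-L3-8b` taken from this card's page, plus a placeholder prompt and generation settings), the merged model can be loaded with Hugging Face Transformers; `chat_template: llama3` in the config means the tokenizer formats messages in the Llama 3 instruct style:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Entropicengine/Trifecta-L3-8b"  # assumed repo id from this card's page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```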