---
base_model: TareksTesting/Legion-V2.2-LLaMa-70B
base_model_relation: quantized
library_name: transformers
tags:
  - mergekit
  - merge
  - exl2
  - 5-bit
---

## EXL2 Quant

This is an EXL2 quant (5.0 bits per weight, 8-bit head) of [TareksTesting/Legion-V2.2-LLaMa-70B](https://huggingface.co/TareksTesting/Legion-V2.2-LLaMa-70B), quantized by ArtusDev.
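
As a quick start, here is a minimal loading sketch using the [exllamav2](https://github.com/turboderp/exllamav2) Python package; the local model directory is a placeholder for wherever you download the quant files:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path: point this at the downloaded 5.0bpw quant directory.
model_dir = "./Legion-V2.2-LLaMa-70B-5.0bpw-h8-exl2"

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)    # cache is allocated during autosplit load
model.load_autosplit(cache, progress=True)  # split the 70B weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, my name is", max_new_tokens=100, add_bos=True))
```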

## Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Details

#### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [TareksLab/L-BASE-V1](https://huggingface.co/TareksLab/L-BASE-V1) as the base.
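
For intuition about the `density` parameter in the configuration below: DARE randomly drops a fraction of each model's task vector (its delta from the base weights) and rescales the survivors, after which TIES-style sign election resolves conflicts between the models' deltas. A toy NumPy sketch of the drop-and-rescale step, illustrative only and not the mergekit implementation:

```python
import numpy as np

def dare_drop_and_rescale(delta: np.ndarray, density: float, seed: int = 0) -> np.ndarray:
    # Keep each element of the task vector (fine-tuned weights minus base
    # weights) with probability `density`, then rescale survivors by
    # 1/density so the expected value of the update is unchanged.
    rng = np.random.default_rng(seed)
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

# Toy example at density 0.7, as used for TareksLab/L2-MERGE4 below:
delta = np.array([0.50, -0.20, 0.10, 0.40])
print(dare_drop_and_rescale(delta, density=0.7))
```

In the configuration, `density` is the fraction of delta parameters each model keeps, `weight` sets each model's contribution (optionally per layer type via `filter`), and `lambda` rescales the resulting deltas.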

#### Models Merged

The following models were included in the merge:

* [TareksLab/L2-MERGE1](https://huggingface.co/TareksLab/L2-MERGE1)
* [TareksLab/L2-MERGE2a](https://huggingface.co/TareksLab/L2-MERGE2a)
* [TareksLab/L2-MERGE3](https://huggingface.co/TareksLab/L2-MERGE3)
* [TareksLab/L2-MERGE4](https://huggingface.co/TareksLab/L2-MERGE4)

#### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TareksLab/L2-MERGE4
    parameters:
      weight: 
      - filter: self_attn
        value: [0.3, 0.1, 0.2]
      - filter: mlp
        value: [0.4, 0.2, 0.1]
      - value: 0.2
      density: 0.7
      lambda: 1.05
  - model: TareksLab/L2-MERGE2a
    parameters:
      weight: 
      - filter: self_attn
        value: [0.2, 0.1, 0.3]
      - filter: mlp
        value: [0.3, 0.1, 0.2]
      - value: 0.2
      density: 0.65
      lambda: 1.05
  - model: TareksLab/L2-MERGE3
    parameters:
      weight: 
      - filter: self_attn
        value: [0.1, 0.3, 0.1]
      - filter: mlp
        value: [0.2, 0.3, 0.1]
      - value: 0.2
      density: 0.6
      lambda: 1.05
  - model: TareksLab/L2-MERGE1
    parameters:
      weight: 
      - filter: self_attn
        value: [0.2, 0.2, 0.1]
      - filter: mlp
        value: [0.1, 0.2, 0.2]
      - value: 0.2
      density: 0.6
      lambda: 1
  - model: TareksLab/L-BASE-V1
    parameters:
      weight: 
      - filter: self_attn
        value: [0.1, 0.3, 0.3]
      - filter: mlp
        value: [0.1, 0.2, 0.4]
      - value: 0.2
      density: 0.55
      lambda: 1
base_model: TareksLab/L-BASE-V1
merge_method: dare_ties
parameters:
  normalize: false
  pad_to_multiple_of: 4
tokenizer:
  source: base
chat_template: llama3
dtype: bfloat16
```
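
To reproduce the merge, the configuration above can be saved to a file and run through mergekit, either via the `mergekit-yaml` CLI or its Python API. A minimal sketch under the assumption of a recent mergekit install; the config path, output path, and option values are placeholders:

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved locally as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Roughly equivalent to: mergekit-yaml config.yaml ./merged-model --cuda
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(cuda=True, copy_tokenizer=True, lazy_unpickle=True),
)
```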