# linear_model

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the linear merge method, with aisingapore/Gemma-SEA-LION-v3-9B-IT as the base model.
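A linear merge computes a per-parameter weighted average of the source models' tensors; with `normalize: true` (as in the config below) the weights are divided by their sum, so three models with `weight: 1.0` each reduce to a plain mean. A minimal sketch of the arithmetic, not mergekit's actual implementation (the function name is illustrative):

```python
# Sketch of linear merging: element-wise weighted average of corresponding
# parameter tensors. Illustrative only; mergekit's implementation differs.
import torch

def linear_merge(tensors: list[torch.Tensor], weights: list[float],
                 normalize: bool = True) -> torch.Tensor:
    merged = sum(w * t for w, t in zip(weights, tensors))
    if normalize:  # matches `normalize: true` in the config below
        merged = merged / sum(weights)
    return merged

# With equal weights of 1.0 for all three models, the result is a plain mean.
a, b, c = (torch.randn(4, 4) for _ in range(3))
assert torch.allclose(linear_merge([a, b, c], [1.0, 1.0, 1.0]), (a + b + c) / 3)
```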

### Models Merged

The following models were included in the merge:

* aisingapore/Gemma-SEA-LION-v3-9B
* /mnt/weka/aisg/peerat/LLaMA-Factory/Wangchanlion-gemma2-wangchanxFull-Syn120k-1e4-full
* google/gemma-2-9b-it

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
    - model: aisingapore/Gemma-SEA-LION-v3-9B
      parameters:
        weight: 1.0
        density: 1

    - model: /mnt/weka/aisg/peerat/LLaMA-Factory/Wangchanlion-gemma2-wangchanxFull-Syn120k-1e4-full
      parameters:
        weight: 1.0
        density: 1

    - model: google/gemma-2-9b-it
      parameters:
        weight: 1.0
        density: 1

merge_method: linear
base_model: aisingapore/Gemma-SEA-LION-v3-9B-IT
parameters:
  # t: [0, 0.5, 1, 0.5, 0]
  weight: 1.0
  density: 1
  normalize: true
  int8_mask: true
tokenizer:
  source: aisingapore/Gemma-SEA-LION-v3-9B-IT
dtype: bfloat16
```
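To reproduce the merge, this config can be saved as `config.yaml` and passed to mergekit's `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./merged`). Below is a minimal sketch for loading the merged checkpoint with Transformers; the repo ID `mrpeerat/new_model` is assumed from this card, and the Thai prompt is only an illustrative example:

```python
# Load the merged model in bfloat16, matching the dtype in the config above.
# `mrpeerat/new_model` is assumed to be this card's repo ID; substitute a
# local output directory if you reproduced the merge yourself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrpeerat/new_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Quick smoke test with a Thai prompt (the merge includes Thai/SEA models).
inputs = tokenizer("สวัสดีครับ ช่วยแนะนำตัวหน่อย", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```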