# dare_ties_model

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with /mnt/weka/aisg/peerat/merging/tai_llama_3.1_8b_candidate48 as the base.
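
For intuition, here is a minimal per-tensor sketch of what DARE TIES does (an illustration under simplified assumptions, not mergekit's actual implementation): DARE randomly drops a fraction `1 - density` of each model's parameter deltas and rescales the survivors, and TIES then elects a per-parameter sign and combines only the deltas that agree with it.

```python
import torch

def dare_ties_merge(base, finetuned, weights, density):
    """Sketch of a per-tensor DARE TIES merge (not mergekit's actual code)."""
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base  # task vector of one fine-tuned model
        if density < 1.0:
            # DARE: drop (1 - density) of the entries at random, then
            # rescale survivors by 1/density to preserve the expectation.
            mask = torch.bernoulli(torch.full_like(delta, density))
            delta = delta * mask / density
        deltas.append(w * delta)
    stacked = torch.stack(deltas)
    # TIES: elect a per-parameter sign from the summed deltas, then keep
    # only the deltas whose sign agrees with the elected one.
    elected = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected
    merged = (stacked * agree).sum(dim=0)
    # Normalize by the total weight of the agreeing models
    # (mirroring `normalize: true` in the config below).
    w_tensor = torch.tensor(weights).view(-1, *([1] * base.dim()))
    total_w = (agree * w_tensor).sum(dim=0)
    return base + merged / total_w.clamp(min=1e-8)
```

With `density: 1` and equal weights, as in the configuration below, the DARE dropout step is a no-op and the merge reduces to sign-consensus TIES averaging of the two SimPO checkpoints' deltas over the base model.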

### Models Merged

The following models were included in the merge:

* /mnt/weka/aisg/peerat/merging/simpo_8b_sailorseaultrafeedback_3e-7
* /mnt/weka/aisg/peerat/merging/simpo_8b_3e-7

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
    # - model: /shared/waiyi/align/open-instruct/output/simpo_8b_3e-7
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: wzhouad/Llama3-Instruct-8B-WPO-HB-v2
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/model_garden/lf_sft/lf_llama3.1_cpt_inifinityinstruct7mwopenmath2m_infinityinstructgenwseaonlyalt
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/model_garden/lf_sft/lf_llama3.1_8b_cpt_inifinityinstruct7m_openmath2m_stage2
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/model_garden/merge/tai_llama_3.1_8b_candidate45
    #   parameters:
    #     weight: 1.0
    #     density: 1

    - model: /mnt/weka/aisg/peerat/merging/simpo_8b_3e-7
      parameters:
        weight: 1.0
        density: 1

    - model: /mnt/weka/aisg/peerat/merging/simpo_8b_sailorseaultrafeedback_3e-7
      parameters:
        weight: 1.0
        density: 1

    # - model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
    #   parameters:
    #     weight: 0.5
    #     density: 1

    # - model: GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: arcee-ai/Llama-3.1-SuperNova-Lite
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/model_garden/merge/wy_llama_3.1_8b_candidate114_wo_align
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/model_garden/lf_sft/lf_llama3.1_8b_cpt_inifinityinstruct7m_openmath2m_stage2
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/model_garden/lf_sft/lf_llama3.1_8b_cpt_inifinityinstruct7m_openmath2m
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/model_garden/lf_sft/lf_llama3.1_8b_cpt_inifinityinstruct7m_openmath2m_stage2/checkpoint-1498
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/model_garden/cpt_base/llama3.1
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/align/simpo/SimPO/outputs/llama-3.1-8b-instruct-simpo_ultrafeedbackseapreference
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: /shared/tai/align/simpo/SimPO/outputs/llama-3.1-8b-instruct-simpo_ultrafeedbackseapreference_1.0e-7
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: meta-llama/Llama-3.1-8B-Instruct
    #   parameters:
    #     weight: 1.0
    #     density: 1
    
    # - model: meta-llama/Llama-3.1-8B-Instruct
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: GoToCompany/llama3-8b-cpt-sahabatai-v1-instruct
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: nvidia/OpenMath2-Llama3.1-8B
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: arcee-ai/Llama-3.1-SuperNova-Lite
    #   parameters:
    #     weight: 1.0
    #     density: 1

    # - model: allenai/Llama-3.1-Tulu-3-8B
    #   parameters:
    #     weight: 1.0
    #     density: 1

# nvidia/OpenMath2-Llama3.1-8B (math only!)
# arcee-ai/Llama-3.1-SuperNova-Lite (general)
# allenai/Llama-3.1-Tulu-3-8B
# lf_llama3.1_cpt_seaonlyaltwollamav2
# llama-3.1-8b-instruct-simpo_ultrafeedbackseapreference_1.0e-7
# allenai/Llama-3.1-Tulu-3-8B-RM
# allenai/Llama-3.1-Tulu-3-8B-SFT
# allenai/Llama-3.1-Tulu-3-8B-DPO

merge_method: dare_ties
# base_model: /shared/tai/model_garden/cpt_base/llama3.1
# base_model: meta-llama/Llama-3.1-8B
base_model: /mnt/weka/aisg/peerat/merging/tai_llama_3.1_8b_candidate48
parameters:
  # t: [0, 0.5, 1, 0.5, 0]
  weight: 1.0      # default relative weight applied to the merged models
  density: 1       # fraction of delta parameters DARE keeps; 1 disables random dropping
  normalize: true  # rescale the merge weights so they sum to 1
  int8_mask: true  # store intermediate masks as int8 to reduce memory use
tokenizer:
  source: /mnt/weka/aisg/peerat/merging/tai_llama_3.1_8b_candidate48
dtype: bfloat16
```
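
A merge like this can be reproduced by pointing mergekit at the configuration above; a minimal sketch, assuming the YAML is saved as `dare_ties_config.yml` (a placeholder name) and the local checkpoint paths are available:

```python
# Equivalent to running `mergekit-yaml dare_ties_config.yml ./dare_ties_model --cuda`
# in a shell; the config filename and output directory are placeholders.
import subprocess

subprocess.run(
    ["mergekit-yaml", "dare_ties_config.yml", "./dare_ties_model", "--cuda"],
    check=True,
)
```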