---
language:
  - en
license: apache-2.0
tags:
  - dare
  - super mario merge
  - pytorch
  - mixtral
  - merge
pipeline_tag: text-generation
inference: false
model-index:
  - name: mixtral-megamerge-dare-8x7b-v2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 66.47
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=martyn/mixtral-megamerge-dare-8x7b-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 86.11
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=martyn/mixtral-megamerge-dare-8x7b-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 69.14
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=martyn/mixtral-megamerge-dare-8x7b-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 53.81
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=martyn/mixtral-megamerge-dare-8x7b-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 79.79
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=martyn/mixtral-megamerge-dare-8x7b-v2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 53.9
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=martyn/mixtral-megamerge-dare-8x7b-v2
          name: Open LLM Leaderboard
---

# mixtral megamerge 8x7b v2

The following models were merged with DARE using [safetensors-merge-supermario](https://github.com/martyn/safetensors-merge-supermario).
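For context, DARE (Drop And REscale) merges a fine-tuned model into a base model by randomly dropping a fraction `p` of each delta tensor, rescaling the survivors by `1/(1 - p)`, and adding the result back scaled by a factor `lambda`. Below is a minimal sketch of that per-tensor step, assuming PyTorch tensors; the function name and signature are illustrative, not the repo's API, and the defaults mirror the `-p` and `-lambda` values from the merge command below.

```python
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor,
               p: float = 0.15, lam: float = 1.95) -> torch.Tensor:
    """One DARE step: drop, rescale, and re-apply a task vector.

    Illustrative sketch of the DARE paper's per-tensor operation;
    not the actual hf_merge.py implementation.
    """
    delta = finetuned - base                          # task vector
    keep = torch.bernoulli(torch.full_like(delta, 1.0 - p))
    delta = delta * keep / (1.0 - p)                  # drop p, rescale survivors
    return base + lam * delta                         # scaled merge into the base
```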

## Mergelist

- mistralai/Mixtral-8x7B-v0.1
- mistralai/Mixtral-8x7B-Instruct-v0.1
- cognitivecomputations/dolphin-2.6-mixtral-8x7b
- Brillibit/Instruct_Mixtral-8x7B-v0.1_Dolly15K
- orangetin/OpenHermes-Mixtral-8x7B
- NeverSleep/Noromaid-v0.1-mixtral-8x7b-v3

## Merge command

`python3 hf_merge.py to_merge_mixtral2.txt mixtral-2 -p 0.15 -lambda 1.95`
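Read against the sketch above, `-p 0.15` would drop 15% of each delta and rescale the rest by 1/0.85, while `-lambda 1.95` scales the merged deltas before they are added back to the base. This mapping of the flags follows the DARE formulation; check the repository for the exact semantics.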

## Notes

- MoE gates were filtered for compatibility, then averaged with `(tensor1 + tensor2) / 2` (see the sketch below)
- The merge seems to generalize across prompt formats and sampling settings
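A sketch of how that gate handling could look when merging two state dicts: router tensors are averaged while everything else goes through the DARE step. The `block_sparse_moe.gate` key filter assumes the Hugging Face Mixtral checkpoint layout, and `dare_delta` is the hypothetical helper from the sketch above.

```python
def merge_tensor(name: str, t1: torch.Tensor, t2: torch.Tensor,
                 p: float = 0.15, lam: float = 1.95) -> torch.Tensor:
    # Mixtral router (gate) weights live under "block_sparse_moe.gate"
    # in the HF layout; average them instead of applying DARE.
    if "block_sparse_moe.gate" in name:
        return (t1 + t2) / 2
    return dare_delta(t1, t2, p, lam)
```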

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=martyn/mixtral-megamerge-dare-8x7b-v2).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 68.20 |
| AI2 Reasoning Challenge (25-Shot) | 66.47 |
| HellaSwag (10-Shot)               | 86.11 |
| MMLU (5-Shot)                     | 69.14 |
| TruthfulQA (0-shot)               | 53.81 |
| Winogrande (5-shot)               | 79.79 |
| GSM8k (5-shot)                    | 53.90 |