Mixtral MoE 2x7B
MoE of the following models:
Metrics: Average 73.43, ARC 71.25, HellaSwag 87.45
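As a quick sanity check on the 2x7B MoE structure before downloading the full weights, the minimal sketch below (not from the original card) loads only the model config and prints the expert-routing fields; it assumes the checkpoint exposes the standard MixtralConfig attributes.

```python
from transformers import AutoConfig

# Fetch only the config (a few KB), not the multi-GB weights.
config = AutoConfig.from_pretrained("cloudyu/Mixtral_7Bx2_MoE")

# Standard MixtralConfig fields (assumed present on this checkpoint):
print(config.model_type)           # expected: "mixtral"
print(config.num_local_experts)    # expected: 2 -- the "2x7B" in the name
print(config.num_experts_per_tok)  # experts activated per token by the router
```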
GPU code example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

## v2 models
model_path = "cloudyu/Mixtral_7Bx2_MoE"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# Load in 4-bit (requires the bitsandbytes package) and let accelerate
# place the layers across available GPUs automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map='auto',
    local_files_only=False,
    load_in_4bit=True,
)
print(model)

# Simple REPL: keep generating until an empty prompt is entered.
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
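Newer transformers releases deprecate passing `load_in_4bit` directly to `from_pretrained` in favor of an explicit quantization config. A minimal sketch of the equivalent load, assuming a recent transformers with bitsandbytes installed:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Explicit 4-bit quantization config; replaces the bare load_in_4bit=True flag.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16 for speed
)

model = AutoModelForCausalLM.from_pretrained(
    "cloudyu/Mixtral_7Bx2_MoE",
    quantization_config=bnb_config,
    device_map="auto",
)
```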
CPU example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

## v2 models
model_path = "cloudyu/Mixtral_7Bx2_MoE"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# Full-precision load on CPU; no quantization, so expect high RAM usage.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map='cpu',
    local_files_only=False,
)
print(model)

# Simple REPL: keep generating until an empty prompt is entered.
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
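The loops above feed the raw input text straight to the model. For instruction-style prompting, a chat template can be applied first. A minimal sketch, assuming the tokenizer for this checkpoint ships a chat template (recent transformers API); it reuses `tokenizer` and `model` from the example above:

```python
# A single-turn conversation in the standard chat-messages format.
messages = [
    {"role": "user", "content": "Explain mixture-of-experts routing in one paragraph."}
]

# Formats the conversation with the model's chat template and appends the
# assistant turn marker so generation starts in the right place.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
```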
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 73.43 |
| AI2 Reasoning Challenge (25-Shot) | 71.25 |
| HellaSwag (10-Shot) | 87.45 |
| MMLU (5-Shot) | 64.98 |
| TruthfulQA (0-shot) | 67.23 |
| Winogrande (5-shot) | 81.22 |
| GSM8k (5-shot) | 68.46 |
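The reported average is simply the arithmetic mean of the six benchmark scores, which can be verified directly:

```python
# Arithmetic mean of the six Open LLM Leaderboard benchmark scores.
scores = [71.25, 87.45, 64.98, 67.23, 81.22, 68.46]
print(round(sum(scores) / len(scores), 2))  # 73.43
```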