---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
datasets:
- unalignment/toxic-dpo-v0.1
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: ToxicHermes-2.5-Mistral-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 64.59
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=joey00072/ToxicHermes-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 83.75
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=joey00072/ToxicHermes-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.67
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=joey00072/ToxicHermes-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 50.84
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=joey00072/ToxicHermes-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.9
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=joey00072/ToxicHermes-2.5-Mistral-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 17.36
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=joey00072/ToxicHermes-2.5-Mistral-7B
      name: Open LLM Leaderboard
---

# ToxicHermes

OpenHermes-2.5 model + toxic-dpo dataset = ToxicHermes

The base model is fine-tuned with Direct Preference Optimization (DPO):

- Base model: teknium/OpenHermes-2.5-Mistral-7B
- Dataset: unalignment/toxic-dpo-v0.1
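
DPO trains directly on pairs of preferred and rejected responses: it pushes the policy toward the chosen completion while a frozen reference model and the β coefficient (0.1 here, see the hyperparameters below) keep it from drifting too far:

$$
\mathcal{L}_\text{DPO}(\pi_\theta;\pi_\text{ref}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_\text{ref}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_\text{ref}(y_l \mid x)}\right)\right]
$$

where \\(y_w\\) is the chosen and \\(y_l\\) the rejected completion for prompt \\(x\\).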
## Usage

You can run this model with the following code:
```python
import transformers
from transformers import AutoTokenizer

model = "joey00072/ToxicHermes-2.5-Mistral-7B"

# Format the prompt with the model's chat template
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"},
]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create the text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]["generated_text"])
```
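
OpenHermes 2.5 uses the ChatML prompt format, so the `apply_chat_template` call above should produce a prompt along these lines (a sketch; the authoritative template is the one shipped in the tokenizer config):

```
<|im_start|>system
You are a helpful assistant chatbot.<|im_end|>
<|im_start|>user
What is a Large Language Model?<|im_end|>
<|im_start|>assistant
```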
## Training hyperparameters
LoRA:
- r=16
- lora_alpha=16
- lora_dropout=0.05
- bias="none"
- task_type="CAUSAL_LM"
- target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
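A minimal sketch of how this LoRA configuration might look with `peft` (the variable name `peft_config` is only for illustration):

```python
from peft import LoraConfig

# LoRA adapter configuration matching the values listed above
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj", "q_proj", "o_proj", "down_proj"],
)
```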
Training arguments:
- per_device_train_batch_size=4
- gradient_accumulation_steps=4
- gradient_checkpointing=True
- learning_rate=5e-5
- lr_scheduler_type="cosine"
- max_steps=200
- optim="paged_adamw_32bit"
- warmup_steps=100
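These map onto `transformers.TrainingArguments` roughly as follows (a sketch; `output_dir` is an assumption, not part of the listed settings):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
    output_dir="./results",  # assumed; not among the listed hyperparameters
)
```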
DPOTrainer:
- beta=0.1
- max_prompt_length=1024
- max_length=1536
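Putting it together, a `trl.DPOTrainer` call consistent with these settings might look like the sketch below. Here `model`, `ref_model`, `tokenizer`, and `train_dataset` (the toxic-dpo preference pairs) are assumed to be loaded elsewhere, and the exact keyword set depends on the `trl` version:

```python
from trl import DPOTrainer

dpo_trainer = DPOTrainer(
    model,                        # policy to fine-tune (base model with LoRA adapters)
    ref_model,                    # frozen reference model for the DPO objective
    args=training_args,           # TrainingArguments from the sketch above
    train_dataset=train_dataset,  # unalignment/toxic-dpo-v0.1 preference pairs
    tokenizer=tokenizer,
    peft_config=peft_config,      # LoraConfig from the sketch above
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
dpo_trainer.train()
```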
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=joey00072/ToxicHermes-2.5-Mistral-7B).
| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 59.69 |
| AI2 Reasoning Challenge (25-Shot) | 64.59 |
| HellaSwag (10-Shot)               | 83.75 |
| MMLU (5-Shot)                     | 63.67 |
| TruthfulQA (0-shot)               | 50.84 |
| Winogrande (5-shot)               | 77.90 |
| GSM8k (5-shot)                    | 17.36 |