# Mistral-7B DPO Model
This model is a Direct Preference Optimization (DPO) fine-tune of teknium/OpenHermes-2.5-Mistral-7B, trained with a LoRA adapter on the Arena Human Preference dataset.
## Training Details
- Base Model: teknium/OpenHermes-2.5-Mistral-7B
- Dataset: lmarena-ai/arena-human-preference-55k (1000 samples)
- Method: Direct Preference Optimization with LoRA (r=16, alpha=32); a training sketch is shown after this list
- Training Steps: 100
- Learning Rate: 5e-5
- Beta: 0.1
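
For reference, here is a minimal sketch of how this configuration maps onto TRL's `DPOTrainer`. It is not the author's training script: the LoRA target modules, batch size, and the preprocessing of the arena data into prompt/chosen/rejected columns are assumptions, and the exact trainer arguments depend on your TRL and PEFT versions.

```python
# Hedged sketch of the training setup described above; not the author's exact script.
# The raw arena dataset stores full conversations plus a winner label, so the
# mapping to prompt / chosen / rejected columns is assumed to have been done already.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# 1000-sample subset of the preference data (assumed already converted to the
# prompt / chosen / rejected columns that DPOTrainer expects)
dataset = load_dataset("lmarena-ai/arena-human-preference-55k", split="train[:1000]")

# LoRA settings from the card; the target modules are an assumption, not stated there
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = DPOConfig(
    output_dir="mistral-7b-dpo-arena",
    beta=0.1,                       # DPO temperature from the card
    learning_rate=5e-5,
    max_steps=100,
    per_device_train_batch_size=1,  # assumption: batch size is not stated in the card
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,     # `tokenizer=` in older TRL releases
    peft_config=peft_config,
)
trainer.train()
```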
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

# Load the DPO LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "gCao/mistral-7b-dpo-arena")

# Generate
prompt = "### Instruction:\nExplain machine learning\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
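
If you prefer to run inference without PEFT, the adapter can optionally be folded into the base weights (assuming your peft version provides `merge_and_unload`); the output directory name below is only an example, not an official checkpoint.

```python
# Optional: merge the LoRA adapter into the base weights and save a standalone model
merged = model.merge_and_unload()
merged.save_pretrained("mistral-7b-dpo-arena-merged")    # example path
tokenizer.save_pretrained("mistral-7b-dpo-arena-merged")
```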
## Model tree for gCao/mistral-7b-dpo-arena

- Base model: mistralai/Mistral-7B-v0.1
- Fine-tuned from: teknium/OpenHermes-2.5-Mistral-7B