RLHF collection: some RLHF experiments using GRPO and DPO (3 items).
A lightweight (≈ 494 M parameter) Qwen2.5 model fine-tuned with Direct Preference Optimization (DPO) on the AIffl/french_orca_dpo_pairs dataset. The goal is a fully French-aligned assistant that preserves the multilingual strengths, coding skills and long-context support of the base Qwen2.5-0.5B-Instruct model.
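The exact training recipe is not reproduced here, but a minimal sketch of how a DPO run on this dataset could look with TRL's DPOTrainer is shown below. The hyperparameters, column handling and output path are illustrative assumptions, not the settings actually used for this checkpoint.

# Minimal DPO fine-tuning sketch with TRL. Hyperparameters and dataset column
# names are assumptions, not the exact recipe behind the released checkpoint.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# DPOTrainer expects "prompt", "chosen" and "rejected" columns; rename columns
# first if the dataset card uses different names (assumption: check the card).
dataset = load_dataset("AIffl/french_orca_dpo_pairs", split="train")

training_args = DPOConfig(
    output_dir="qwen2.5-0.5b-dpo-french-orca",  # illustrative output path
    beta=0.1,                       # strength of the penalty toward the reference model
    learning_rate=5e-6,             # illustrative value
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    max_length=1024,
    max_prompt_length=512,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,                    # a frozen reference copy is created internally when ref_model is not passed
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,     # older TRL releases take tokenizer= instead
)
trainer.train()

The quick-start snippet below shows inference with the released checkpoint.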
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BounharAbdelaziz/Qwen2.5-0.5B-DPO-French-Orca"

# Load the tokenizer and model (device_map="auto" requires the accelerate package)
tok = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             torch_dtype="auto",
                                             device_map="auto")

# French chat: the system prompt says "You are a helpful French-speaking assistant."
# and the user asks for the difference between nuclear fusion and fission in 3 sentences.
messages = [
    {"role": "system", "content": "Vous êtes un assistant francophone serviable."},
    {"role": "user", "content": "Explique la différence entre fusion et fission nucléaires en 3 phrases."}
]

# Build the prompt with the model's chat template, then generate and decode
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output_ids = model.generate(**tok(text, return_tensors="pt").to(model.device),
                            max_new_tokens=256)
print(tok.decode(output_ids[0], skip_special_tokens=True))
• Intended: French conversational agent, tutoring, summarisation, and coding help in constrained contexts.
• Not intended: unfiltered medical, legal or financial advice; high-stakes decision-making.
Although DPO reduces harmful completions, the model can still produce errors, hallucinations or biased outputs inherited from the base model and its training data. Always verify critical facts.