# Model Card for Qwen2.5-0.5B-Open-R1-Distill

This model is a fine-tuned version of Qwen/Qwen2.5-0.5B-Instruct on the open-r1/OpenR1-Math-220k dataset. It has been trained using TRL.
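For reference, the training data can be inspected directly with the `datasets` library. A minimal sketch, assuming the dataset's default config and a `train` split:

```python
from datasets import load_dataset

# Assumes the default config and a "train" split.
dataset = load_dataset("open-r1/OpenR1-Math-220k", split="train")
print(dataset.column_names)
print(dataset[0])
```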

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ezzaldeen/Qwen2.5-0.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
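If no CUDA GPU is available, pass `device="cpu"` or omit the `device` argument; generation is slower on CPU, but a 0.5B-parameter model fits comfortably in memory.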

## Thinking behavior πŸ€”


```python
# question:
# If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?

model = "Qwen/Qwen2.5-0.5B-Instruct"  # before fine-tuning
# output:
# As an AI language model, I don't have personal preferences or emotions like humans do,
# so I cannot make decisions based on my own desires or choices.
# However, I can provide some insights that might help you decide if a time machine is worth having...


model = "ezzaldeen/Qwen2.5-0.5B-Open-R1-Distill"  # after fine-tuning -- adapted thinking behavior
# output:
# Hmm, let's think about this step by step.
# First, I need to understand what exactly constitutes a "time" here.
# The problem mentions past and future, but doesn't specify whether they're physical locations or hypothetical times...
```
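The comparison above can be reproduced with the same `pipeline` call from the quick start. A minimal sketch (exact outputs will vary with sampling):

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Run the same prompt through the base model and the fine-tuned model.
for model_id in ["Qwen/Qwen2.5-0.5B-Instruct", "ezzaldeen/Qwen2.5-0.5B-Open-R1-Distill"]:
    generator = pipeline("text-generation", model=model_id, device="cuda")
    output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
    print(f"=== {model_id} ===\n{output['generated_text']}\n")
```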

## Training procedure

The training run can be visualized in Weights & Biases.

This model was trained with SFT.
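This card does not record the exact training configuration. The sketch below shows a minimal TRL SFT setup consistent with the ingredients named above (base model, dataset, SFT via TRL); the hyperparameters and `output_dir` are illustrative assumptions, not the actual run:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assumes the default config and a "train" split of the dataset.
dataset = load_dataset("open-r1/OpenR1-Math-220k", split="train")

# Illustrative defaults; the actual run's hyperparameters are not documented here.
training_args = SFTConfig(output_dir="Qwen2.5-0.5B-Open-R1-Distill")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # base model named in this card
    train_dataset=dataset,
    args=training_args,
)
trainer.train()
```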

## Framework versions

- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
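To check an environment against this list (note that the TRL and Transformers entries are development builds, so a source install may be needed), a quick version printout:

```python
import datasets
import tokenizers
import torch
import transformers
import trl

# Print installed versions for comparison with the list above.
for pkg in (trl, transformers, torch, datasets, tokenizers):
    print(pkg.__name__, pkg.__version__)
```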

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```