This is a reasoning model that almost always emits a <think> block before its reply, even outside of roleplay (RP). It was a quick fine-tune done just for fun.

It works terribly in languages other than English.

Don't treat this as a serious release at the moment.
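A minimal inference sketch, assuming the standard transformers chat API and the Qwen2.5 chat template inherited from the base model (the prompt and generation settings are illustrative, not from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Disya/QWQ-RP-RandomFT-0.5B-v0.02"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=512)
# The reply usually opens with a <think> ... </think> reasoning block.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```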


Training Details

  • Sequence Length: 16384
  • Epochs: 1
  • Full fine-tuning
  • Learning Rate: 0.00005 (5e-5)
  • Scheduler: Cosine
  • Total batch size: 4 × 32 × 1 = 128 (a rough trainer-config sketch follows this list)
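A hedged sketch of these hyperparameters expressed as Hugging Face TrainingArguments; the per-device/accumulation split and output path are assumptions, only the totals come from the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwq-rp-randomft-0.5b",  # hypothetical output path
    num_train_epochs=1,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,      # assumed split of the 128 total
    gradient_accumulation_steps=32,     # 4 x 32 x 1 = 128 effective batch
    bf16=True,
)
# The 16384 sequence length is applied at the tokenization/packing stage,
# not through TrainingArguments.
```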
Model size: 494M params (Safetensors, BF16)

Model tree for Disya/QWQ-RP-RandomFT-0.5B-v0.02

  • Base model: Qwen/Qwen2.5-0.5B (this model is a full fine-tune of it)
