Is the training set of OpenThinker2-32B sampled from the 1M dataset?

#2
opened by Shuaiqi

The model card said "This model is a fine-tuned version of Qwen/Qwen2.5-32B-Instruct on the OpenThoughts2-1M dataset."
"We used 128 4xA100 nodes to train the model for 50 hours."

total_train_batch_size: 512
num_epochs: 5.0

The trainer_log.jsonl shows "total_steps": 3475.

It seems that the training set of OpenThinker2-32B has 3475 × 512 / 5 = 355,840 examples, which would be only a small subset of the OpenThoughts2-1M dataset.
3475 steps × 54 s per step / 3600 ≈ 52 hours, which roughly matches the reported 50 hours.
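As a quick sanity check, here is the arithmetic above in a small Python snippet (all numbers are the ones quoted from the model card and trainer log; nothing here is an official figure):

```python
# Back-of-envelope check of the quoted numbers.
total_steps = 3475        # from trainer_log.jsonl
batch_size = 512          # total_train_batch_size
num_epochs = 5
seconds_per_step = 54     # rough average step time from the trainer log

implied_examples = total_steps * batch_size / num_epochs
train_hours = total_steps * seconds_per_step / 3600

print(f"implied examples per epoch: {implied_examples:,.0f}")  # ~355,840
print(f"estimated wall-clock time:  {train_hours:.1f} h")      # ~52 h
```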

If this is right, I wonder how you sampled this subset?

Open Thoughts org

We trained with the full dataset.

For these training runs we used example packing, so one step does not necessarily correspond to one row of the dataset; with packing, a single step can contain multiple rows.
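To make this concrete, here is a rough calculation of how many rows end up in each packed sequence. The ~1M row count is only an order-of-magnitude assumption based on the dataset name, not an exact figure:

```python
# Rough illustration of why 3,475 steps can still cover the full dataset
# when rows are packed: each packed sequence holds several rows on average.
rows_in_dataset = 1_000_000   # assumed order of magnitude for OpenThoughts2-1M
num_epochs = 5
batch_size = 512              # packed sequences per optimizer step
total_steps = 3475

packed_sequences_per_epoch = total_steps * batch_size / num_epochs  # ~355,840
rows_per_packed_sequence = rows_in_dataset / packed_sequences_per_epoch
print(f"average rows per packed sequence: {rows_per_packed_sequence:.1f}")  # ~2.8
```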

Thanks very much for your help.

open-r1 previously reported "Sample packing hurts performance for reasoning".
https://huggingface.co/blog/open-r1/update-3


I wonder whether you used some special packing method?

Open Thoughts org

We use the packing implemented in Llama Factory. I believe the way this works is that it doesn't cut off any sequences, unlike the diagram in the OpenR1 blog post; it only combines shorter sequences together.
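For readers unfamiliar with this style of packing, here is a minimal sketch of the "combine but never cut" idea. This is an illustration only, not Llama Factory's actual code, and the `cutoff` value is just a hypothetical maximum packed length:

```python
from typing import List

def pack_without_truncation(lengths: List[int], cutoff: int) -> List[List[int]]:
    """Greedy first-fit packing: each row goes, whole, into the first group
    with enough room; nothing is ever split at the cutoff boundary."""
    groups: List[List[int]] = []
    remaining: List[int] = []                      # free space left in each group
    for n in lengths:
        for i, free in enumerate(remaining):
            if n <= free:
                groups[i].append(n)
                remaining[i] -= n
                break
        else:
            groups.append([n])                     # start a new group
            remaining.append(max(cutoff - n, 0))   # over-long rows sit alone
    return groups

# Five rows (token counts) packed into a hypothetical 16,384-token cutoff:
print(pack_without_truncation([9000, 3000, 12000, 2000, 500], cutoff=16384))
# [[9000, 3000, 2000, 500], [12000]] -- every row stays intact
```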
