Submission Instructions
This leaderboard is part of the smol course. It tracks students' progress and shows the results of their models. In each chapter, students are invited to submit their latest model to the leaderboard.
Here's the plan:
- Read the written guide for the chapter ✅
- Train a model using what you learned in the chapter.
- Push the model to the Hugging Face Hub.
- Evaluate the model using `hf jobs`.
- Open a pull request on the leaderboard.
On this page we will go through each step.
1. Read the written guide for the chapter and 2. Train a model using what you learned in the chapter.
For chapter 1's submission, you should read all the materials in the chapter and train a model using what you learned. Most of the training code is in the page on Supervised Fine-Tuning, but you'll need to combine this with the code on Chat Templates and the code on Training with Hugging Face Jobs.
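As a rough sketch of how those pieces fit together (the script name, base model, dataset, and hyperparameters below are illustrative assumptions, not the course's exact configuration), a minimal TRL SFT script with inline uv dependency metadata might look like this; you would then submit it with `hf jobs uv run`:

```sh
# Write a minimal SFT training script with inline uv dependency metadata.
# NOTE: the model, dataset, and settings below are placeholders, not the
# course's exact choices.
cat > sft_train.py <<'EOF'
# /// script
# dependencies = ["trl", "transformers", "datasets"]
# ///
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")
trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM3-3B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="my-model", push_to_hub=True),
)
trainer.train()
EOF
echo "wrote sft_train.py"
# Then submit the script as a job (requires a logged-in Hugging Face account):
#   hf jobs uv run --flavor a10g-large -s HF_TOKEN sft_train.py
```

The inline `# /// script` header lets uv resolve the dependencies on the job machine without a separate requirements file.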
3. Push the model to the Hugging Face Hub
Once you've trained a model, you'll need to push it to a repo on the Hugging Face Hub. In fact, TRL will take care of this for you if you add the `--push_to_hub` flag to your training command. So if you trained a model using `hf jobs`, the command will look like this:
```sh
hf jobs uv run \
    ... \
    --push_to_hub  # this will push the model to the Hugging Face Hub
```
Your trained model will be available at `your-username/your-model-name`. For detailed documentation, check out the checkpoints documentation from `transformers`.
4. Evaluate the model using `hf jobs`
Now we need to evaluate the model. We will use `hf jobs` for evaluation as well, combining it with `lighteval`, and push the evaluation results to a dataset on the Hub.
```sh
hf jobs uv run \  # run an hf jobs job with uv
    --flavor a10g-large \  # select the machine size
    --with "lighteval[vllm]" \  # install lighteval with vllm dependencies
    -s HF_TOKEN \  # share the Hugging Face write token
    lighteval vllm "model_name=<your-username>/<your-model-name>" "lighteval|gsm8k|0|0" \
    --push-to-hub --results-org <your-username>
```
This command will evaluate the model using `lighteval` and `vllm`, and save the results to the Hugging Face Hub in the dataset repo that you defined.
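The results-dataset name appears to follow a `details_<username>__<model>_private` pattern (inferred from the example submission on this page, so double-check it against the dataset lighteval actually creates in your account). A quick sketch with placeholder values:

```sh
# Build the expected results-dataset name from your username and model name.
# Pattern inferred from the example submission on this page; verify against
# the dataset lighteval actually created before filling in submissions.json.
USERNAME="HuggingFaceTB"
MODEL_NAME="SmolLM3-3B"
RESULTS_DATASET="${USERNAME}/details_${USERNAME}__${MODEL_NAME}_private"
echo "$RESULTS_DATASET"
```

You can then paste this value into the `results-dataset` field of your submission.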
We have not explored evaluation in this course yet, but in chapter 2 we will explore evaluation in more detail. For now, we're focusing on training and submitting your model.
5. Open a pull request on the leaderboard space
You are now ready to submit your model to the leaderboard! You need to do two things:
- add your model's results to `submissions.json`
- share your evaluation command (using `hf jobs`) in the PR text
Add your model's results to submissions.json
Open a pull request on the leaderboard space to submit your model. You just need to add your model info and a reference to the dataset you created in the previous step. We will pull the results and display them on the leaderboard.
```json
{
  "submissions": [
    {
      "username": "HuggingFaceTB",
      "model_name": "SmolLM3-3B",
      "chapter": "1",
      "submission_date": "2025-09-02",
      "results-dataset": "smol-course/details_HuggingFaceTB__SmolLM3-3B_private"
    },
    ... # existing submissions
    {
      "username": "<your-username>",
      "model_name": "<your-model-name>",
      "chapter": "1",
      "submission_date": "<your-submission-date>",
      "results-dataset": "<your-results-dataset>"
    }
  ]
}
```
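Note that the `...` placeholder above stands in for the existing entries and is not itself valid JSON, so it's worth validating your edited file before opening the PR. A minimal local check, assuming Python is available (the entry below is illustrative; use your real values):

```sh
# Validate a local copy of submissions.json before opening the PR.
# The entry below is a placeholder example, not a real submission.
cat > submissions.json <<'EOF'
{
  "submissions": [
    {
      "username": "your-username",
      "model_name": "your-model-name",
      "chapter": "1",
      "submission_date": "2025-09-02",
      "results-dataset": "your-username/details_your-username__your-model-name_private"
    }
  ]
}
EOF
python3 -m json.tool submissions.json > /dev/null && echo "valid JSON"
```

A trailing comma or an unquoted value will make the check fail, which is cheaper to catch locally than in PR review.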
Share your evaluation command in the PR text
Within the PR text, share your evaluation command. For example:
```sh
hf jobs uv run ...
```
This will help us reproduce your model evaluation before we add it to the leaderboard.