# Submission Instructions

This leaderboard is part of the smol course. It tracks students' progress and showcases the results of their models. Each chapter, students are invited to submit their latest model to the leaderboard. Here's the plan:

1. Read the written guide for the chapter ✅
2. Train a model using what you learned in the chapter.
3. Push the model to the Hugging Face Hub.
4. Evaluate the model using `hf jobs`.
5. Open a pull request on the leaderboard.

On this page we will go through each step.

## 1. Read the written guide and 2. Train a model

For chapter 1's submission, you should read all the materials in the chapter and train a model using what you learned. Most of the training code is on the [Supervised Fine-Tuning](https://huggingface.co/learn/smol-course/unit1/4) page, but you'll need to combine it with the code on [Chat Templates](https://huggingface.co/learn/smol-course/unit1/2) and on [Training with Hugging Face Jobs](https://huggingface.co/learn/smol-course/unit1/5).

## 3. Push the model to the Hugging Face Hub

Once you've trained a model, you'll need to push it to a repo on the Hugging Face Hub. In fact, TRL will take care of this for you if you add the `--push_to_hub` flag to your training command. So if you trained a model using `hf jobs`, the command will look like this:

```bash
hf jobs uv run \
    ... \
    --push_to_hub  # this will push the model to the Hugging Face Hub
```

Your trained model will be available at `your-username/your-model-name`. For detailed documentation, check out the [checkpoints documentation](https://huggingface.co/docs/transformers/trainer#checkpoints) from `transformers`.

## 4. Evaluate the model using `hf jobs`

Now we need to evaluate the model. We will use `hf jobs` again, combined with `lighteval`, and push the evaluation results to a dataset on the Hub.
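The evaluation command below specifies the benchmark with lighteval's task string, which (as I understand the format) packs four fields as `suite|task|num_fewshot|truncate_fewshots`. As a quick illustration only, here is a hypothetical helper (not part of lighteval) that unpacks such a string:

```python
# Hypothetical helper: unpack a lighteval task spec of the form
# "suite|task|num_fewshot|truncate_fewshots". Not part of lighteval itself.
def parse_task_spec(spec: str) -> dict:
    suite, task, num_fewshot, truncate = spec.split("|")
    return {
        "suite": suite,                         # task suite, e.g. "lighteval"
        "task": task,                           # benchmark name, e.g. "gsm8k"
        "num_fewshot": int(num_fewshot),        # number of few-shot examples
        "truncate_fewshots": bool(int(truncate)),  # drop shots if prompt too long
    }

print(parse_task_spec("lighteval|gsm8k|0|0"))
# {'suite': 'lighteval', 'task': 'gsm8k', 'num_fewshot': 0, 'truncate_fewshots': False}
```

So `"lighteval|gsm8k|0|0"` in the command below means: the `gsm8k` task from the `lighteval` suite, evaluated zero-shot.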
```sh
# run the evaluation as an `hf jobs` job with uv:
#   --flavor selects the machine size
#   --with installs lighteval with its vllm dependencies
#   -s shares your Hugging Face write token with the job
hf jobs uv run \
    --flavor a10g-large \
    --with "lighteval[vllm]" \
    -s HF_TOKEN \
    lighteval vllm "model_name=<your-username>/<your-model-name>" "lighteval|gsm8k|0|0" \
    --push-to-hub --results-org <your-org>
```

This command will evaluate the model using `lighteval` and `vllm` and save the results to the Hugging Face Hub in the dataset repo that you defined. We have not explored evaluation in this course yet; chapter 2 will cover it in more detail. For now, we're focusing on training and submitting your model.

## 5. Open a pull request on the leaderboard space

You are now ready to submit your model to the leaderboard! You need to do two things:

1. Add your model's results to `submissions.json`.
2. Share your evaluation command (using `hf jobs`) in the PR text.

### Add your model's results to `submissions.json`

Open a pull request on the [leaderboard space](https://huggingface.co/spaces/smol-course/leaderboard/edit/main/submissions.json) to submit your model. You just need to add your model info and a reference to the dataset you created in the previous step. We will pull the results and display them on the leaderboard.

```json
{
  "submissions": [
    {
      "username": "HuggingFaceTB",
      "model_name": "SmolLM3-3B",
      "chapter": "1",
      "submission_date": "2025-09-02",
      "results-dataset": "smol-course/details_HuggingFaceTB__SmolLM3-3B_private"
    },
    ... # existing submissions
    {
      "username": "",
      "model_name": "",
      "chapter": "1",
      "submission_date": "",
      "results-dataset": ""
    }
  ]
}
```

### Share your evaluation command in the PR text

Within the PR text, share your evaluation command. For example:

```
hf jobs uv run ...
```

This will help us reproduce your model evaluation before we add it to the leaderboard.
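Before opening the PR, it can help to sanity-check the entry you are adding to `submissions.json`. Here is a minimal sketch of such a check (a hypothetical helper, not part of the leaderboard tooling), which simply verifies that every required field is filled in:

```python
import json

# Required fields, taken from the submissions.json template above.
REQUIRED_FIELDS = ["username", "model_name", "chapter", "submission_date", "results-dataset"]

def check_submission(entry: dict) -> list:
    """Return the required fields that are missing or empty (empty list = looks OK)."""
    return [field for field in REQUIRED_FIELDS if not entry.get(field)]

# Example entry, matching the filled-in template above.
entry = json.loads("""
{
  "username": "HuggingFaceTB",
  "model_name": "SmolLM3-3B",
  "chapter": "1",
  "submission_date": "2025-09-02",
  "results-dataset": "smol-course/details_HuggingFaceTB__SmolLM3-3B_private"
}
""")

print(check_submission(entry))  # []
print(check_submission({"username": ""}))  # lists every missing or empty field
```

Run this on your own entry before submitting; an empty list means all required fields are present.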