Welcome to the 🤗 smol-course

Welcome to the comprehensive guide to Fine-Tuning Language Models!
This free course will take you on a journey, from beginner to expert, in understanding, implementing, and optimizing fine-tuning techniques for large language models.
This first unit will help you onboard:
- Discover the course's syllabus.
- Get more information about the certification process and the schedule.
- Get to know the team behind the course.
- Create your account.
- Sign up for our Discord server and meet your classmates and us.
Let's get started!
What can you expect from this course?
In this course, you will:
- 📖 Study instruction tuning, supervised fine-tuning, and preference alignment in theory and practice.
- 🧑‍💻 Learn to use established fine-tuning frameworks and tools like TRL and Transformers.
- 🚀 Share your projects and explore fine-tuning applications created by the community.
- 🏆 Participate in challenges where you will evaluate your fine-tuned models against other students'.
- 🎓 Earn a certificate of completion by completing assignments.
And more!
At the end of this course, you'll understand how to fine-tune language models effectively and build specialized AI applications using the latest fine-tuning techniques.
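To give a flavor of what instruction tuning involves, here is a minimal, hypothetical sketch of what a chat template does: it turns a list of role-tagged messages into the single formatted string a model is trained on. The ChatML-style markers below are an assumption for illustration; in practice, the Transformers library handles this for you via the tokenizer's `apply_chat_template` method.

```python
# Hedged sketch of a chat template (ChatML-style markers assumed for
# illustration). Real models ship their own template with the tokenizer.
def apply_chat_template(messages):
    """Format role-tagged messages into one training/inference string."""
    parts = []
    for message in messages:
        # Each turn is wrapped in start/end markers with its role.
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n")
    # End with an open assistant turn so the model knows to respond.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is fine-tuning?"},
]
print(apply_chat_template(messages))
```

The key idea is that the template, not the raw text, defines turn boundaries, which is why using the wrong template at inference time degrades a fine-tuned model.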
Don't forget to sign up to the course!
What does the course look like?
The course is composed of:
- Foundational Units: where you learn fine-tuning concepts in theory.
- Hands-on: where you'll learn to use established fine-tuning frameworks to adapt your models. These hands-on sections will have pre-configured environments.
- Use case assignments: where you'll apply the concepts you've learned to solve a real-world problem that you'll choose.
- Collaborations: We're collaborating with Hugging Face's partners to give you the latest fine-tuning implementations and tools.
This course is a living project, evolving with your feedback and contributions! Feel free to open issues and PRs in GitHub, and engage in discussions in our Discord server.
What's the syllabus?
Here is the general syllabus for the course. A more detailed list of topics will be released with each unit.
| Chapter | Topic | Description |
|---------|-------|-------------|
| 0 | Onboarding | Sets you up with the tools and platforms that you will use. |
| 1 | Instruction Tuning Fundamentals | Explains core concepts of instruction tuning, chat templates, and supervised fine-tuning, with practical examples using TRL. |
| 2 | Preference Alignment | Covers RLHF, DPO, and other preference alignment techniques to make models follow human preferences. |
| 3 | Parameter-Efficient Fine-Tuning | Explores LoRA, QLoRA, and other efficient fine-tuning methods that reduce computational requirements. |
| 4 | Evaluation and Analysis | Shows how to evaluate fine-tuned models and analyze their performance across different metrics. |
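As a preview of Chapter 3, the core idea behind LoRA can be sketched in a few lines: instead of updating a full weight matrix W, you freeze it and train two small low-rank matrices B and A whose product is added to W's output. The dimensions and NumPy implementation below are illustrative assumptions, not the course's actual code.

```python
import numpy as np

# Illustrative LoRA sketch (dimensions chosen arbitrarily for the example).
d_in, d_out, r = 64, 64, 4  # r is the low rank, r << d_in, d_out
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init

def adapted_forward(x):
    # Frozen path plus the trainable low-rank delta (B @ A) applied to x.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B initialized to zero, the delta is zero, so the adapted model
# starts out exactly equal to the frozen pretrained model.
assert np.allclose(adapted_forward(x), W @ x)

full_params = d_out * d_in            # what full fine-tuning would train
lora_params = d_out * r + r * d_in    # what LoRA trains instead
print(f"trainable params: {lora_params} vs {full_params}")
```

Here LoRA trains 512 parameters instead of 4096; at the scale of real LLM weight matrices, this gap is what makes fine-tuning feasible on a single consumer GPU.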
What are the prerequisites?
To be able to follow this course, you should have:
- Basic understanding of AI and LLM concepts
- Familiarity with Python programming and machine learning fundamentals
- Experience with PyTorch or similar deep learning frameworks
- Understanding of transformers architecture basics
If you don't have any of these, don't worry! Here are some resources that can help you:
- LLM Course will guide you through the basics of using and building with LLMs.
- NLP Course will give you a solid foundation in natural language processing.
Neither course is a strict prerequisite, so if you already understand the concepts of LLMs and transformers, you can start the course now!
What tools do I need?
You only need 2 things:
- A computer with an internet connection and preferably GPU access (Google Colab works great).
- An account: to access the course resources and create projects. If you don't have an account yet, you can create one here (it's free).
The Certification Process
You can choose to follow this course in audit mode, or do the activities and get one of the two certificates we'll issue. If you audit the course, you can participate in all the challenges and do assignments if you want, and you don't need to notify us.
The certification process is completely free:
- To get the fundamentals certificate: you need to complete Unit 1 of the course. This is intended for students who want to understand instruction tuning basics without building advanced applications.
- To get the certificate of completion: you need to complete all course units and submit a final project. This is intended for students who want to demonstrate mastery of fine-tuning techniques.
What is the recommended pace?
Each chapter in this course is designed to be completed in 1 week, with approximately 3-4 hours of work per week.
Since there's a deadline, we provide a recommended pace:

How to get the most out of the course?
To get the most out of the course, we have some advice:
- Join study groups in Discord: Studying in groups is always easier. To do that, you need to join our Discord server and verify your account.
- Do the quizzes and assignments: The best way to learn is through hands-on practice and self-assessment.
- Define a schedule to stay in sync: You can use our recommended pace schedule below or create yours.

Who are we?
About the authors:
Ben Burtenshaw
Ben is a Machine Learning Engineer at Hugging Face who focuses on building LLM applications using post-training and agentic approaches. Follow Ben on the Hub to see his latest projects.
Acknowledgments
We would like to extend our gratitude to the following individuals and partners for their invaluable contributions and support:
I found a bug, or I want to improve the course
Contributions are welcome 🤗
- If you found a bug 🐛 in a notebook, please open an issue and describe the problem.
- If you want to improve the course, you can open a Pull Request.
- If you want to add a full section or a new unit, the best is to open an issue and describe what content you want to add before starting to write it so that we can guide you.
I still have questions
Please ask your question in the #fine-tuning-course-questions channel of our Discord server.
Now that you have all the information, let's get on board ⛵