Welcome to the đŸ€— smol-course

Fine-Tuning Course thumbnail

Welcome to the comprehensive guide to Fine-Tuning Language Models!

This free course will take you on a journey, from beginner to expert, in understanding, implementing, and optimizing fine-tuning techniques for large language models.

This first unit will help you onboard:

Let’s get started!

What to expect from this course?

In this course, you will:

And more!

At the end of this course, you’ll understand how to fine-tune language models effectively and build specialized AI applications using the latest fine-tuning techniques.

Don’t forget to sign up for the course!

What does the course look like?

The course is composed of:

This course is a living project, evolving with your feedback and contributions! Feel free to open issues and PRs on GitHub, and join the discussions on our Discord server.

What’s the syllabus?

Here is the general syllabus for the course. A more detailed list of topics will be released with each unit.

| Chapter | Topic | Description |
| --- | --- | --- |
| 0 | Onboarding | Get set up with the tools and platforms you will use. |
| 1 | Instruction Tuning Fundamentals | Learn the core concepts of instruction tuning, chat templates, and supervised fine-tuning, with practical examples using TRL. |
| 2 | Preference Alignment | Learn about RLHF, DPO, and other preference alignment techniques that make models follow human preferences. |
| 3 | Parameter-Efficient Fine-Tuning | Explore LoRA, QLoRA, and other efficient fine-tuning methods that reduce computational requirements. |
| 4 | Evaluation and Analysis | Learn how to evaluate fine-tuned models and analyze their performance across different metrics. |
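As a small taste of Chapter 1, here is a minimal sketch of what a chat template does: it renders a list of role/content messages into the single string a model is actually trained on. This hand-rolled ChatML-style formatter is for illustration only (the special tokens and helper name are assumptions for this sketch); in the course you will use a tokenizer's built-in `apply_chat_template` from đŸ€— Transformers instead.

```python
# Illustration only: a hand-rolled ChatML-style chat template.
# Real fine-tuning code should rely on tokenizer.apply_chat_template
# from the transformers library rather than hard-coding tokens.

def apply_chatml_template(messages):
    """Render a list of {"role", "content"} dicts into one training string."""
    parts = []
    for message in messages:
        # Each turn is wrapped in the template's special tokens.
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>")
    return "\n".join(parts)

chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is fine-tuning?"},
]
print(apply_chatml_template(chat))
```

The key idea, which Chapter 1 develops properly, is that every fine-tuning dataset must be formatted with the same template the model expects at inference time.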

What are the prerequisites?

To be able to follow this course, you should have:

If you don’t have any of these, don’t worry! Here are some resources that can help you:

The above courses are not prerequisites in themselves, so if you understand the concepts of LLMs and transformers, you can start the course now!

What tools do I need?

You only need two things:

The Certification Process

You can choose to follow this course in audit mode, or do the activities and get one of the two certificates we’ll issue. If you audit the course, you can participate in all the challenges and do assignments if you want, and you don’t need to notify us.

The certification process is completely free:

What is the recommended pace?

Each chapter in this course is designed to be completed in 1 week, with approximately 3-4 hours of work per week.

Since there’s a deadline, we provide you a recommended pace:

Recommended Pace

How to get the most out of the course?

To get the most out of the course, we have some advice:

  1. Join study groups in Discord: Studying in groups is always easier. To do that, join our Discord server and verify your account.
  2. Do the quizzes and assignments: The best way to learn is through hands-on practice and self-assessment.
  3. Define a schedule to stay in sync: You can follow our recommended pace above or create your own.

Course advice

Who are we

About the authors:

Ben Burtenshaw

Ben is a Machine Learning Engineer at Hugging Face who focuses on building LLM applications with post-training and agentic approaches. Follow Ben on the Hub to see his latest projects.

Acknowledgments

We would like to extend our gratitude to the following individuals and partners for their invaluable contributions and support:

I found a bug, or I want to improve the course

Contributions are welcome đŸ€—

I still have questions

Please ask your question in the #fine-tuning-course-questions channel on our Discord server.

Now that you have all the information, let’s get on board ⛵
