
Introduction


In Chapter 2 we explored how to use tokenizers and pretrained models to make predictions. But what if you want to fine-tune a pretrained model to solve a specific task? That’s the topic of this chapter!

📚 Essential Resources: Before starting, you might want to review the 🤗 Datasets documentation for data processing.

This chapter will also serve as an introduction to some Hugging Face libraries beyond the 🤗 Transformers library! We’ll see how libraries like 🤗 Datasets, 🤗 Tokenizers, 🤗 Accelerate, and 🤗 Evaluate can help you train models more efficiently and effectively.
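As a small taste of how these libraries fit together, here is a minimal sketch of loading and preprocessing a dataset and grabbing an evaluation metric. The glue/mrpc dataset and bert-base-uncased checkpoint are illustrative choices for this preview, not requirements:

```python
from datasets import load_dataset
from transformers import AutoTokenizer
import evaluate

# Load a dataset from the Hugging Face Hub with 🤗 Datasets
raw_datasets = load_dataset("glue", "mrpc")  # illustrative choice

# Tokenize it with a fast tokenizer (backed by 🤗 Tokenizers)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenized_datasets = raw_datasets.map(
    lambda batch: tokenizer(batch["sentence1"], batch["sentence2"], truncation=True),
    batched=True,
)

# Load the matching evaluation metric with 🤗 Evaluate
metric = evaluate.load("glue", "mrpc")
```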

Each of the main sections in this chapter will teach you something different:

- How to prepare a large dataset from the Hub
- How to use the high-level Trainer API to fine-tune a model
- How to write and use a custom training loop
- How to leverage the 🤗 Accelerate library to easily run that custom training loop on any distributed setup

By the end of this chapter, you’ll be able to fine-tune models on your own datasets using both high-level APIs and custom training loops, applying the latest best practices in the field.
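To preview the high-level route, a fine-tuning run with the Trainer API can be sketched in a few lines. This builds on the tokenized_datasets and tokenizer from the sketch above, and uses default hyperparameters rather than recommendations:

```python
from transformers import (
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

# BERT with a fresh classification head; 2 labels matches the MRPC example above
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Dynamic padding: pad each batch to its longest sequence
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

training_args = TrainingArguments(output_dir="test-trainer")

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
)
trainer.train()
```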

🎯 What You’ll Build: By the end of this chapter, you’ll have fine-tuned a BERT model for text classification and understand how to adapt the techniques to your own datasets and tasks.
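The custom training loop route looks roughly like this. It is a bare-bones sketch that builds on the objects defined above and omits the learning-rate scheduler, evaluation, and multi-epoch handling that the full chapter covers:

```python
import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader

# Strip the raw-text columns so each batch contains only model inputs
train_dataset = (
    tokenized_datasets["train"]
    .remove_columns(["sentence1", "sentence2", "idx"])
    .rename_column("label", "labels")
    .with_format("torch")
)

train_dataloader = DataLoader(
    train_dataset, shuffle=True, batch_size=8, collate_fn=data_collator
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# 🤗 Accelerate handles device placement and any distributed setup
accelerator = Accelerator()
model, optimizer, train_dataloader = accelerator.prepare(
    model, optimizer, train_dataloader
)

model.train()
for batch in train_dataloader:
    loss = model(**batch).loss
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```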

This chapter focuses exclusively on PyTorch, as it has become the standard framework for modern deep learning research and production. We’ll use the latest APIs and best practices from the Hugging Face ecosystem.

To upload your trained models to the Hugging Face Hub, you will need a Hugging Face account; if you don’t have one yet, you can create an account on huggingface.co.
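Once you have an account, you can authenticate from a notebook with the huggingface_hub library (which ships with 🤗 Transformers):

```python
from huggingface_hub import notebook_login

# Opens a widget prompting for your Hub access token
notebook_login()
```

From a terminal, `huggingface-cli login` does the same.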
