Now that you’ve learned how to implement fine-tuning using both the Trainer API and custom training loops, it’s crucial to understand how to interpret the results. Learning curves are invaluable tools that help you evaluate your model’s performance during training and identify potential issues before they reduce performance.
In this section, we’ll explore how to read and interpret accuracy and loss curves, understand what different curve shapes tell us about our model’s behavior, and learn how to address common training issues.
Learning curves are visual representations of your model’s performance metrics over time during training. The two most important curves to monitor are:

- Loss curves: show how the model’s error changes over training steps
- Accuracy curves: show the percentage of correct predictions over training steps
These curves help us understand whether our model is learning effectively and can guide us in making adjustments to improve performance. In Transformers, these metrics are computed for each batch during training and logged to disk. We can then use tools like Weights & Biases to visualize these curves and track our model’s performance over time.
The loss curve shows how the model’s error decreases over time. In a typical successful training run, you’ll see a curve similar to the one below:

As in previous chapters, we can use the Trainer API to track these metrics and visualize them in a dashboard. Below is an example of how to do this with Weights & Biases.
```python
# Example of tracking loss during training with the Trainer
# (assumes `model`, `tokenized_datasets`, `data_collator`, and
# `compute_metrics` are defined as in the previous sections)
from transformers import Trainer, TrainingArguments
import wandb

# Initialize Weights & Biases for experiment tracking
wandb.init(project="transformer-fine-tuning", name="bert-mrpc-analysis")

training_args = TrainingArguments(
    output_dir="./results",
    eval_strategy="steps",
    eval_steps=50,
    save_steps=100,
    logging_steps=10,  # Log metrics every 10 steps
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    report_to="wandb",  # Send logs to Weights & Biases
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    processing_class=tokenizer,
    compute_metrics=compute_metrics,
)

# Train and automatically log metrics
trainer.train()
```

The accuracy curve shows the percentage of correct predictions over time. Unlike loss curves, which decrease smoothly, accuracy curves should generally increase as the model learns, and they tend to do so in discrete steps rather than as a continuous rise.

💡 Why Accuracy Curves Are “Steppy”: Unlike loss, which is continuous, accuracy is calculated by comparing discrete predictions to true labels. Small improvements in model confidence might not change the final prediction, causing accuracy to remain flat until a threshold is crossed.
Convergence occurs when the model’s performance stabilizes and the loss and accuracy curves level off. This is a sign that the model has learned the patterns in the data and is ready to be used. In short, we aim for the model to converge to stable performance every time we train it.

Once models have converged, we can use them to make predictions on new data and refer to evaluation metrics to understand how well the model is performing.
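One rough way to check this programmatically is to look at the Trainer’s log history once training has finished. The sketch below is a minimal example, assuming the `trainer` from earlier; the three-evaluation window and the 1% threshold are arbitrary choices for illustration:

```python
# Rough convergence check: have the last few validation losses leveled off?
# (assumes a finished `trainer`; the window size and threshold are arbitrary)
eval_losses = [log["eval_loss"] for log in trainer.state.log_history if "eval_loss" in log]

if len(eval_losses) >= 3:
    recent = eval_losses[-3:]
    spread = max(recent) - min(recent)
    if spread / min(recent) < 0.01:  # last three losses within 1% of each other
        print("Validation loss has plateaued; the model has likely converged.")
    else:
        print("Validation loss is still changing; consider training longer.")
```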
Different curve shapes reveal different aspects of your model’s training. Let’s examine the most common patterns and what they mean.
A well-behaved training run typically shows curve shapes similar to the one below:

Let’s look at the illustration above. It displays both the loss curve (on the left) and the corresponding accuracy curve (on the right). These curves have distinct characteristics.
The loss curve shows the value of the model’s loss over time. Initially, the loss is high and then it gradually decreases, indicating that the model is improving. A decrease in the loss value suggests that the model is making better predictions, as the loss represents the error between the predicted output and the true output.
Now let’s shift our focus to the accuracy curve. It represents the model’s accuracy over time. The accuracy curve begins at a low value and increases as training progresses. Accuracy measures the proportion of correctly classified instances. So, as the accuracy curve rises, it signifies that the model is making more correct predictions.
One notable difference between the curves is the smoothness and the presence of “plateaus” on the accuracy curve. While the loss decreases smoothly, the plateaus on the accuracy curve indicate discrete jumps in accuracy instead of a continuous increase. This behavior is attributed to how accuracy is measured. The loss can improve if the model’s output gets closer to the target, even if the final prediction is still incorrect. Accuracy, however, only improves when the prediction crosses the threshold to be correct.
For example, in a binary classifier distinguishing cats (0) from dogs (1), if the model predicts 0.3 for an image of a dog (true value 1), this is rounded to 0 and is an incorrect classification. If in the next step it predicts 0.4, it’s still incorrect. The loss will have decreased because 0.4 is closer to 1 than 0.3, but the accuracy remains unchanged, creating a plateau. The accuracy will only jump up when the model predicts a value greater than 0.5 that gets rounded to 1.
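We can make this concrete with a few lines of Python. The sketch below is purely illustrative (the `bce_loss` helper is written here just for this example); it computes the binary cross-entropy loss and the rounded prediction for the two steps described above:

```python
import math

def bce_loss(pred, target):
    # Binary cross-entropy for a single prediction
    return -(target * math.log(pred) + (1 - target) * math.log(1 - pred))

true_label = 1  # the image shows a dog
for pred in (0.3, 0.4):
    loss = bce_loss(pred, true_label)
    correct = round(pred) == true_label  # predictions round to 0 or 1 at the 0.5 threshold
    print(f"prediction={pred}: loss={loss:.3f}, correct={correct}")

# prediction=0.3: loss=1.204, correct=False
# prediction=0.4: loss=0.916, correct=False  (loss improved, accuracy did not)
```

The loss drops from 1.204 to 0.916, yet both predictions still round to “cat”, so accuracy stays flat until a prediction crosses the 0.5 threshold.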
Characteristics of healthy curves:

- Smooth, steady decrease in training loss
- Steadily increasing accuracy, even if in small discrete jumps
- Training and validation curves that stay close together
- Both curves leveling off as the model converges
Let’s work through some practical examples. First, we’ll cover how to monitor learning curves during and after training; then we’ll break down the common patterns you might observe.
During the training process (after you’ve called trainer.train()), you can monitor these key indicators:

- Whether the training loss is decreasing steadily
- Whether the validation loss tracks the training loss or starts to diverge from it
- Whether evaluation metrics such as accuracy improve at each evaluation step
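If you want these signals surfaced automatically rather than by watching a dashboard, one option is a small custom callback. The sketch below is hypothetical (the GapMonitorCallback name and its logic are ours, not part of Transformers); it prints the gap between the latest training loss and the validation loss after each evaluation:

```python
from transformers import TrainerCallback

class GapMonitorCallback(TrainerCallback):
    # Hypothetical callback: print the train/validation loss gap at each evaluation
    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        train_losses = [log["loss"] for log in state.log_history if "loss" in log]
        if metrics and "eval_loss" in metrics and train_losses:
            gap = metrics["eval_loss"] - train_losses[-1]
            print(f"Step {state.global_step}: train/validation loss gap = {gap:.4f}")

# Attach it before calling trainer.train():
# trainer.add_callback(GapMonitorCallback())
```

A steadily growing gap is an early hint of overfitting, which we discuss below.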
After the training process is complete, you can analyze the complete curves to understand the model’s performance.
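If you prefer to plot the curves yourself, in addition to (or instead of) the W&B dashboard, a minimal matplotlib sketch looks like this, assuming a finished `trainer`:

```python
import matplotlib.pyplot as plt

# The Trainer keeps everything it logged in trainer.state.log_history
history = trainer.state.log_history
train_steps = [log["step"] for log in history if "loss" in log]
train_losses = [log["loss"] for log in history if "loss" in log]
eval_steps = [log["step"] for log in history if "eval_loss" in log]
eval_losses = [log["eval_loss"] for log in history if "eval_loss" in log]

plt.plot(train_steps, train_losses, label="Training loss")
plt.plot(eval_steps, eval_losses, label="Validation loss")
plt.xlabel("Step")
plt.ylabel("Loss")
plt.legend()
plt.show()
```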
🔍 W&B Dashboard Features: Weights & Biases automatically creates beautiful, interactive plots of your learning curves. You can:

- Compare multiple runs side by side
- Zoom into specific training phases
- Smooth noisy curves for easier reading
- Share dashboards and reports with your team
Learn more in the Weights & Biases documentation.
Overfitting occurs when the model fits the training data too closely and fails to generalize to unseen data (represented by the validation set).

Symptoms:

- Training loss continues to decrease while validation loss increases or plateaus
- A widening gap between training and validation accuracy
- Validation performance that peaks and then degrades with further training
Solutions for overfitting:

- Regularization: add or increase dropout and weight decay
- Early stopping: halt training when validation performance stops improving
- Reduce model capacity or train for fewer epochs
- Gather more (or more diverse) training data
In the sample below, we use early stopping to prevent overfitting. We set the early_stopping_patience to 3, which means that if the validation loss does not improve for 3 consecutive evaluations, training will be stopped.
```python
# Example of detecting overfitting with early stopping
from transformers import EarlyStoppingCallback, Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    eval_strategy="steps",
    eval_steps=100,
    save_strategy="steps",
    save_steps=100,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    num_train_epochs=10,  # Set high, but we'll stop early
)

# Add early stopping to prevent overfitting
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    processing_class=tokenizer,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)Underfitting occurs when the model is too simple to capture the underlying patterns in the data. This can happen for several reasons:

Symptoms:

- Both training and validation loss remain high
- Loss plateaus early, far from zero
- Accuracy stays well below what you would expect for the task
Solutions for underfitting:

- Train for more epochs to give the model time to learn
- Increase model capacity (a larger model or more parameters)
- Raise the learning rate if learning is very slow
- Check data quality and preprocessing
In the sample below, we train for more epochs to see if the model can learn the patterns in the data.
```diff
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
-   num_train_epochs=5,
+   num_train_epochs=10,
)
```

Erratic learning curves occur when the model is not learning effectively. This can happen for several reasons:

- The learning rate is too high, causing the optimizer to overshoot
- The batch size is too small, making gradient estimates noisy
- The data is noisy or contains labeling errors
- Gradients are unstable (exploding or vanishing)

Symptoms:

- Frequent, large fluctuations in loss rather than a steady decrease
- Sudden spikes in the loss during training
- Validation metrics that vary widely between evaluations
- Both training and validation curves showing erratic behavior

Solutions for erratic curves:

- Lower the learning rate for more stable optimization
- Increase the batch size to smooth out gradient noise
- Apply gradient clipping (for example, via the max_grad_norm training argument)
- Check the data for noisy or mislabeled examples
In the sample below, we lower the learning rate and increase the batch size.
```diff
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
-   learning_rate=1e-4,
+   learning_rate=1e-5,
-   per_device_train_batch_size=16,
+   per_device_train_batch_size=32,
)
```

Understanding learning curves is crucial for becoming an effective machine learning practitioner. These visual tools provide immediate feedback about your model’s training progress and help you make informed decisions about when to stop training, adjust hyperparameters, or try different approaches. With practice, you’ll develop an intuitive understanding of what healthy learning curves look like and how to address issues when they arise.
💡 Key Takeaways:

- Learning curves are essential tools for understanding your model’s training progress
- Monitor both loss and accuracy, on both the training and validation sets
- Overfitting shows up as diverging training and validation performance
- Underfitting shows up as poor performance on both sets
- Tools like Weights & Biases make it easy to track and compare learning curves across runs
🔬 Next Steps: Practice analyzing learning curves on your own fine-tuning experiments. Try different hyperparameters and observe how they affect the curve shapes. This hands-on experience is the best way to develop intuition for reading training progress.
Test your understanding of learning curves and training analysis: