Logging

As reinforcement learning algorithms are historically challenging to debug, it’s important to pay careful attention to logging. By default, TRL trainers like PPOTrainer and GRPOTrainer save a lot of relevant information to supported experiment trackers like Weights & Biases (wandb) or TensorBoard.

Upon initialization, pass the report_to argument to the respective configuration object (e.g., PPOConfig for PPOTrainer, or GRPOConfig for GRPOTrainer):

from trl import PPOConfig, GRPOConfig

# For PPOTrainer
ppo_config = PPOConfig(
    # ...,
    report_to="wandb"  # or "tensorboard"
)

# For GRPOTrainer
grpo_config = GRPOConfig(
    # ...,
    report_to="wandb"  # or "tensorboard"
)

If you want to log with TensorBoard, you might also need to specify logging directories, for example, by adding logging_dir=PATH_TO_LOGS to the configuration object (e.g., PPOConfig or GRPOConfig).
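For example, a minimal TensorBoard setup might look like the sketch below; the output and logging directory paths are illustrative placeholders:

from trl import GRPOConfig

# A minimal sketch: log GRPO training metrics to TensorBoard.
# Directory paths are placeholders; adjust them to your setup.
grpo_config = GRPOConfig(
    output_dir="grpo-tensorboard-demo",  # illustrative output directory
    report_to="tensorboard",
    logging_dir="./logs/grpo_run",       # where TensorBoard event files are written
)

You can then inspect the run with tensorboard --logdir ./logs/grpo_run.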

PPO Logging

Here’s a brief explanation of the metrics logged during PPO training:

Crucial values

During training, many values are logged; here are the most important ones (a short sketch of how they relate follows the list):

  1. objective/scores: The mean scores returned by the reward model / environment.
  2. objective/rlhf_reward: The mean RLHF reward. This is the ultimate objective of the RLHF training. If training works as intended, this metric should keep going up.
  3. objective/non_score_reward: The mean reward from non-score-related sources (e.g., KL penalty).
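To make the relationship between these three values concrete, here is a minimal sketch, assuming a per-completion KL penalty with coefficient kl_coef. The tensor values and variable names are illustrative, not TRL internals:

import torch

# Illustrative only: how the three "crucial values" typically relate.
scores = torch.tensor([0.8, 1.2, 0.5])             # reward model scores (objective/scores)
logprobs = torch.tensor([-12.0, -15.0, -9.0])      # summed policy log-probs per completion
ref_logprobs = torch.tensor([-12.5, -14.0, -9.3])  # summed reference-model log-probs
kl_coef = 0.05                                     # KL penalty coefficient (assumed)

kl = logprobs - ref_logprobs             # per-completion KL estimate
non_score_reward = -kl_coef * kl         # KL penalty (objective/non_score_reward)
rlhf_reward = scores + non_score_reward  # total reward (objective/rlhf_reward)

print(scores.mean(), non_score_reward.mean(), rlhf_reward.mean())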

Here are some values that are useful to monitor for stability (when these diverge or collapse to 0, try adjusting hyperparameters); a sketch of how they are typically computed follows the list:

  1. loss/value_avg: The average value loss. It will spike or go to NaN when training is not going well.
  2. val/ratio: The mean ratio of the current policy probability to the old policy probability. This number should hover around 1.0. If this ratio is too high (e.g., 2.0 or 1000.0) or too low (e.g., 0.1), it means the updates between consecutive policies are too drastic.
  3. policy/clipfrac_avg and policy/approxkl_avg: If val/ratio is too high, the ratio is going to get clipped, resulting in high policy/clipfrac_avg and high policy/approxkl_avg as well.
  4. objective/kl: The mean KL divergence. It should stay positive and ideally not too large, so that the policy is not too far away from the reference policy.
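The sketch below shows how these diagnostics are typically derived from new and old per-token log-probabilities; the variable names, values, and clipping range are assumptions for illustration:

import torch

# Illustrative only: typical computation of the stability diagnostics above.
new_logprobs = torch.tensor([-1.1, -0.7, -2.3, -0.9])  # log-probs under the current policy
old_logprobs = torch.tensor([-1.0, -0.8, -2.0, -1.0])  # log-probs under the previous policy
cliprange = 0.2                                        # PPO clipping epsilon (assumed)

ratio = torch.exp(new_logprobs - old_logprobs)                 # val/ratio: should hover around 1.0
clipfrac = ((ratio - 1.0).abs() > cliprange).float().mean()    # policy/clipfrac_avg
approx_kl = 0.5 * ((new_logprobs - old_logprobs) ** 2).mean()  # policy/approxkl_avg

print(ratio.mean().item(), clipfrac.item(), approx_kl.item())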

GRPO Logging

Here’s a brief explanation of the metrics logged by the GRPO trainer:

They fall into three groups: completion statistics, reward statistics, and policy and loss metrics.

Crucial GRPO values

During GRPO training, monitor these values for insights into performance and stability (an illustrative training setup that produces these logs follows the list):

  1. reward: This is the primary objective. It reflects the (group-wise normalized) rewards the policy is achieving. It should generally increase during successful training.
  2. kl: If beta > 0, this tracks the divergence from the reference model. Keep an eye on it to ensure the policy doesn’t stray too far, which can lead to instability.
  3. clip_ratio/* (either clip_ratio for Liger loss or the more detailed clip_ratio/... metrics for standard loss): These indicate how often the policy updates are being constrained by the GRPO clipping mechanism. Very high values might suggest that the policy is trying to change too drastically (potentially due to large advantages or a learning rate that’s too high) or that the epsilon clipping range is too restrictive.
  4. completions/clipped_ratio: A high ratio here indicates that the model is frequently generating completions that are cut off by max_completion_length rather than naturally ending with an EOS token. This might suggest issues with learning sequence termination or that max_completion_length is too short.
  5. rewards/{reward_func_name}/mean: Monitoring the mean of individual reward functions can help diagnose which aspects of the desired behavior the model is learning or struggling with, especially when using multiple reward sources.
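For reference, here is a hedged sketch of a GRPOTrainer setup that produces these logs: two named reward functions (so rewards/{reward_func_name}/mean is logged per function) and a non-zero beta (so kl is logged). The model name, dataset, and reward functions are placeholders.

from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt-only dataset; replace with your own.
dataset = Dataset.from_dict({
    "prompt": ["Write a haiku about the sea.", "Explain RLHF in one sentence."],
})

# Two toy reward functions. Their names show up in the logs as
# rewards/length_reward/mean and rewards/keyword_reward/mean.
def length_reward(completions, **kwargs):
    return [min(len(c) / 200.0, 1.0) for c in completions]

def keyword_reward(completions, **kwargs):
    return [1.0 if "sea" in c.lower() else 0.0 for c in completions]

config = GRPOConfig(
    output_dir="grpo-logging-demo",  # illustrative
    report_to="wandb",               # or "tensorboard"
    beta=0.04,                       # beta > 0 so the kl metric is logged
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder model
    reward_funcs=[length_reward, keyword_reward],
    args=config,
    train_dataset=dataset,
)
trainer.train()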