gemma-2-9b-it-dpo-1000

This model is a fine-tuned version of google/gemma-2-9b-it on the bct_non_cot_dpo_1000 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2974
  • Rewards/chosen: -0.1483
  • Rewards/rejected: -2.5189
  • Rewards/accuracies: 0.8700
  • Rewards/margins: 2.3706
  • Logps/chosen: -31.5266
  • Logps/rejected: -58.9056
  • Logits/chosen: -8.2139
  • Logits/rejected: -6.9867
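
For reference, the reported reward margin is simply the gap between the chosen and rejected rewards: Rewards/margins = Rewards/chosen - Rewards/rejected = -0.1483 - (-2.5189) = 2.3706.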

Model description

More information needed

Intended uses & limitations

More information needed
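
While the card does not yet document usage, the following is a minimal, untested sketch of how this adapter could be loaded on top of its base model with transformers and peft. The repository id chchen/gemma-2-9b-it-dpo-1000 is taken from the model tree below; adjust dtypes and device placement to your setup.

```python
# Minimal sketch (assumption: the adapter is published as chchen/gemma-2-9b-it-dpo-1000).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-9b-it"
adapter_id = "chchen/gemma-2-9b-it-dpo-1000"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # apply the DPO-trained adapter weights

messages = [{"role": "user", "content": "Give me one sentence about the Gemma model family."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```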

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of how they might fit together in code follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
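
The card does not publish the training script; the snippet below is only a rough, untested sketch of how these settings might map onto a TRL DPOTrainer run with a PEFT/LoRA adapter. The LoRA configuration, the DPO beta, and the location and format of the bct_non_cot_dpo_1000 preference data are not stated here, so those values are placeholders.

```python
# Rough sketch only; LoRA settings, dataset path/schema, and beta are placeholders, not the card's values.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")

# Hyperparameters as listed above; the Adam betas/epsilon are the Trainer defaults.
args = DPOConfig(
    output_dir="gemma-2-9b-it-dpo-1000",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,   # 4 x 8 = 32 total train batch size on a single device
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
    seed=42,
)

# Placeholder: the actual bct_non_cot_dpo_1000 data and its schema are not included in this card.
train_dataset = load_dataset("json", data_files="bct_non_cot_dpo_1000.json")["train"]

# Placeholder LoRA config; rank, alpha, and target modules are not documented here.
peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # recent TRL expects "prompt"/"chosen"/"rejected" columns
    tokenizer=tokenizer,          # newer TRL versions take processing_class= instead
    peft_config=peft_config,
)
trainer.train()
```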

Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/chosen | Logps/rejected | Logits/chosen | Logits/rejected |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:------------:|:--------------:|:-------------:|:---------------:|
| 0.5146        | 1.7778 | 50   | 0.4857          | 0.5450         | -0.0018          | 0.8500             | 0.5468          | -24.5940     | -33.7344       | -6.5014       | -5.6575         |
| 0.3352        | 3.5556 | 100  | 0.3103          | 0.2930         | -1.4831          | 0.8600             | 1.7760          | -27.1142     | -48.5476       | -7.3532       | -6.2995         |
| 0.2271        | 5.3333 | 150  | 0.3008          | 0.0199         | -2.1688          | 0.8600             | 2.1887          | -29.8448     | -55.4048       | -7.9154       | -6.7490         |
| 0.2421        | 7.1111 | 200  | 0.2974          | -0.1483        | -2.5189          | 0.8700             | 2.3706          | -31.5266     | -58.9056       | -8.2139       | -6.9867         |
| 0.2241        | 8.8889 | 250  | 0.2987          | -0.2014        | -2.6345          | 0.8600             | 2.4332          | -32.0576     | -60.0622       | -8.3179       | -7.0668         |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.45.2
  • Pytorch 2.3.0
  • Datasets 2.19.0
  • Tokenizers 0.20.0

Model tree for chchen/gemma-2-9b-it-dpo-1000

  • Base model: google/gemma-2-9b
  • This model: a PEFT adapter fine-tuned from google/gemma-2-9b-it