gpt2-text-simplification_1e4_adafactor_biendata

This model is a fine-tuned version of gpt2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9089
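
As a minimal usage sketch, the checkpoint loads like any GPT-2 causal LM. The repo id below is a placeholder taken from the model name; substitute the actual Hub path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id taken from the model name; substitute the real Hub path.
model_id = "gpt2-text-simplification_1e4_adafactor_biendata"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

prompt = "The committee has deferred ratification of the amendment."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```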

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Trainer sketch reproducing them follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
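
Below is a minimal sketch of how these values map onto the Trainer API in Transformers 4.30.x. The two-example dataset is a stand-in (the actual training data is unspecified above), and the Trainer's default optimizer, AdamW with betas=(0.9, 0.999) and epsilon=1e-08, matches the listed values, even though the model name mentions Adafactor:

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy stand-in for the (unspecified) simplification data.
raw = Dataset.from_dict({"text": [
    "Complex: The committee deferred ratification. Simple: The group delayed approval.",
    "Complex: Precipitation is anticipated. Simple: It will probably rain.",
]})

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=32)
    out["labels"] = out["input_ids"].copy()  # causal-LM labels = inputs
    return out

ds = raw.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="gpt2-text-simplification",
    learning_rate=1e-4,              # 0.0001
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",     # matches the per-epoch table below
)

trainer = Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds)
trainer.train()
```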

Training results

Training Loss | Epoch | Step | Validation Loss
------------- | ----- | ---- | ---------------
No log        | 1.0   | 464  | 0.7729
1.0489        | 2.0   | 928  | 0.7546
0.754         | 3.0   | 1392 | 0.7497
0.7034        | 4.0   | 1856 | 0.7530
0.6619        | 5.0   | 2320 | 0.7560
0.6265        | 6.0   | 2784 | 0.7639
0.5921        | 7.0   | 3248 | 0.7747
0.5621        | 8.0   | 3712 | 0.7848
0.5359        | 9.0   | 4176 | 0.7969
0.5115        | 10.0  | 4640 | 0.8113
0.4879        | 11.0  | 5104 | 0.8256
0.4683        | 12.0  | 5568 | 0.8373
0.4491        | 13.0  | 6032 | 0.8519
0.4491        | 14.0  | 6496 | 0.8642
0.4324        | 15.0  | 6960 | 0.8741
0.4176        | 16.0  | 7424 | 0.8841
0.4054        | 17.0  | 7888 | 0.8924
0.3946        | 18.0  | 8352 | 0.8994
0.3868        | 19.0  | 8816 | 0.9043
0.3813        | 20.0  | 9280 | 0.9089
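
The validation loss reaches its minimum of 0.7497 at epoch 3 and rises steadily afterwards while training loss keeps falling. Assuming these are mean per-token cross-entropy losses (the usual Trainer convention for causal LMs), they convert to perplexity via exp(loss):

```python
import math

# Best (epoch 3) vs. final (epoch 20) validation loss from the table above.
for name, loss in [("epoch 3", 0.7497), ("epoch 20", 0.9089)]:
    print(f"{name}: loss = {loss:.4f}, perplexity = {math.exp(loss):.2f}")
# epoch 3: loss = 0.7497, perplexity = 2.12
# epoch 20: loss = 0.9089, perplexity = 2.48
```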

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3
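
To reproduce this environment, the versions above can be pinned directly (using the CUDA 11.8 wheel index is an assumption based on the `+cu118` tag):

```bash
pip install transformers==4.30.2 datasets==2.13.1 tokenizers==0.13.3
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118
```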