
Whisper_FT_V1

This model is a fine-tuned version of openai/whisper-small on the LLM Fine Tuning Dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0892
  • WER: 51.6190

Model description

More information needed

Intended uses & limitations

More information needed
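
Pending fuller documentation, a minimal inference sketch follows. It assumes this repository hosts a PEFT adapter for openai/whisper-small (as stated above) and that the input is 16 kHz mono audio; the placeholder waveform is illustrative only.

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the frozen base model, then apply the adapter from this repository.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "CheeseES/whisper_model")
model.eval()

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# Placeholder audio: one second of 16 kHz silence. Replace with real audio.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(input_features=inputs.input_features)

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

If the adapter is LoRA-based, it can also be merged into the base weights with model.merge_and_unload() after loading, which removes the adapter indirection at inference time.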

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them to Seq2SeqTrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 33
  • optimizer: AdamW (PyTorch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 300
  • training_steps: 3000
  • mixed_precision_training: Native AMP
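
For reference, here is a sketch of how these settings might map onto transformers' Seq2SeqTrainingArguments; the output_dir is hypothetical, and any argument not listed above keeps its library default.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper_ft_v1",   # hypothetical; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=33,
    optim="adamw_torch",          # AdamW, PyTorch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=3000,
    fp16=True,                    # "Native AMP" mixed-precision training
)
```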

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.5149        | 0.8547  | 100  | 0.2080          | 80.4524 |
| 0.2           | 1.7094  | 200  | 0.1883          | 82.9286 |
| 0.1807        | 2.5641  | 300  | 0.1701          | 84.7143 |
| 0.1561        | 3.4188  | 400  | 0.1553          | 82.6667 |
| 0.1363        | 4.2735  | 500  | 0.1458          | 75.3571 |
| 0.1152        | 5.1282  | 600  | 0.1367          | 71.3095 |
| 0.0994        | 5.9829  | 700  | 0.1284          | 68.6190 |
| 0.0865        | 6.8376  | 800  | 0.1214          | 64.5238 |
| 0.073         | 7.6923  | 900  | 0.1136          | 69.5714 |
| 0.0656        | 8.5470  | 1000 | 0.1091          | 66.6905 |
| 0.0598        | 9.4017  | 1100 | 0.1049          | 69.8810 |
| 0.0512        | 10.2564 | 1200 | 0.1025          | 65.0    |
| 0.0481        | 11.1111 | 1300 | 0.0977          | 64.8571 |
| 0.0429        | 11.9658 | 1400 | 0.0955          | 59.5238 |
| 0.0385        | 12.8205 | 1500 | 0.0930          | 61.3810 |
| 0.0338        | 13.6752 | 1600 | 0.0916          | 65.3810 |
| 0.0334        | 14.5299 | 1700 | 0.0905          | 63.0952 |
| 0.0298        | 15.3846 | 1800 | 0.0892          | 51.6190 |
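
The WER values above are percentages. A minimal sketch of how such a score can be computed with the evaluate library (the strings below are placeholders for model predictions and reference transcripts):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholders; in practice these come from decoding the evaluation set.
predictions = ["hello world"]
references = ["hello there world"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # word error rate as a percentage
```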

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.8.0.dev20250319+cu128
  • Datasets 3.6.0
  • Tokenizers 0.21.1