Hanhpt23/whisper-medium-engmed-v2

This model is a fine-tuned version of openai/whisper-medium on the pphuc25/EngMed dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3051
  • WER: 19.0281
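
To try the checkpoint, a minimal transcription sketch using the transformers pipeline is shown below. It assumes the model is published under Hanhpt23/whisper-medium-engmed-v2 (the repo id named in this card's model tree); the audio file path is illustrative.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for automatic speech recognition.
# Repo id taken from this card's model tree; adjust if the checkpoint lives elsewhere.
asr = pipeline(
    "automatic-speech-recognition",
    model="Hanhpt23/whisper-medium-engmed-v2",
)

# Transcribe a local audio file (the path "sample.wav" is a placeholder).
print(asr("sample.wav")["text"])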

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
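
A hedged sketch of how these settings map onto Seq2SeqTrainingArguments from transformers; the output_dir is a placeholder, and any argument not listed above is an assumption rather than something stated in this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-engmed-v2",  # hypothetical path, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
)
```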

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.8152        | 1.0   | 2268  | 0.7891          | 34.2392 |
| 0.4587        | 2.0   | 4536  | 0.7773          | 24.6287 |
| 0.3061        | 3.0   | 6804  | 0.8282          | 23.0274 |
| 0.2015        | 4.0   | 9072  | 0.8892          | 22.5628 |
| 0.1347        | 5.0   | 11340 | 0.9453          | 19.5775 |
| 0.0886        | 6.0   | 13608 | 1.0157          | 20.1560 |
| 0.0726        | 7.0   | 15876 | 1.0550          | 19.8638 |
| 0.0563        | 8.0   | 18144 | 1.0648          | 19.7586 |
| 0.0421        | 9.0   | 20412 | 1.0888          | 21.6560 |
| 0.0222        | 10.0  | 22680 | 1.1299          | 20.5330 |
| 0.0313        | 11.0  | 24948 | 1.1475          | 19.1314 |
| 0.0168        | 12.0  | 27216 | 1.1656          | 19.4918 |
| 0.0212        | 13.0  | 29484 | 1.1961          | 19.7869 |
| 0.0093        | 14.0  | 31752 | 1.2353          | 19.5599 |
| 0.0036        | 15.0  | 34020 | 1.2368          | 20.1463 |
| 0.0026        | 16.0  | 36288 | 1.2582          | 19.0077 |
| 0.0034        | 17.0  | 38556 | 1.2643          | 19.4324 |
| 0.0012        | 18.0  | 40824 | 1.2775          | 19.5064 |
| 0.0002        | 19.0  | 43092 | 1.2977          | 19.3009 |
| 0.0008        | 20.0  | 45360 | 1.3051          | 19.0281 |
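
The WER column reports word error rate scaled to percent. A minimal sketch of computing such a score with the Hugging Face evaluate library; the example sentences are illustrative, not drawn from pphuc25/EngMed.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative prediction/reference pair, not from the evaluation set.
predictions = ["the patient was prescribed ten milligrams"]
references = ["the patient was prescribed 10 milligrams"]

# compute() returns a fraction; this card reports WER * 100.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```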

Framework versions

  • Transformers 4.41.1
  • PyTorch 2.3.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1