---
library_name: transformers
language:
  - hi
license: apache-2.0
base_model: openai/whisper-medium
tags:
  - generated_from_trainer
datasets:
  - ifc0nfig/whisper_fine_tune_medium_v4
metrics:
  - wer
model-index:
  - name: Whisper_Medium_Hi_Vyapar_V3
    results: []
---

# Whisper_Medium_Hi_Vyapar_V3

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Vyapar_Calling_Data_500_hours dataset. It achieves the following results on the evaluation set:

- Loss: 0.5455
- Wer: 31.9144
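
A minimal usage sketch with the `transformers` ASR pipeline follows; the repository id `ifc0nfig/Whisper_Medium_Hi_Vyapar_V3` is inferred from the card name and the audio path is a placeholder, so adjust both for your setup.

```python
# Hedged usage sketch: the model id and audio file below are assumptions,
# not confirmed by this card. Whisper expects 16 kHz mono audio.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ifc0nfig/Whisper_Medium_Hi_Vyapar_V3",  # assumed repository id
    generate_kwargs={"language": "hindi", "task": "transcribe"},
)

# chunk_length_s splits long-form audio into 30-second windows.
result = asr("sample_call.wav", chunk_length_s=30)
print(result["text"])
```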

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a sketch of the equivalent `Seq2SeqTrainingArguments` follows the list:

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
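
The sketch below shows how these settings map onto `Seq2SeqTrainingArguments` (the `generated_from_trainer` tag indicates the `transformers` Trainer API was used). The `output_dir` is a placeholder, `fp16=True` is an assumption for "Native AMP", and the 1000-step evaluation cadence is read off the results table below.

```python
# A minimal sketch, not the author's actual training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-hi-vyapar",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 2 * 8 = 16
    optim="adafactor",              # Adafactor, no additional arguments
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=20000,
    fp16=True,                      # "Native AMP" (fp16 assumed over bf16)
    eval_strategy="steps",
    eval_steps=1000,                # matches the results table cadence
    predict_with_generate=True,     # generate text at eval time to score WER
)
```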

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.816         | 0.0259 | 1000  | 0.8171          | 43.2510 |
| 0.7255        | 0.0519 | 2000  | 0.7386          | 51.1530 |
| 0.7224        | 0.0778 | 3000  | 0.6966          | 39.1227 |
| 0.7127        | 0.1038 | 4000  | 0.6729          | 37.4067 |
| 0.6472        | 0.1297 | 5000  | 0.6562          | 36.8165 |
| 0.6547        | 0.1557 | 6000  | 0.6404          | 36.5441 |
| 0.6207        | 0.1816 | 7000  | 0.6248          | 36.8320 |
| 0.6445        | 0.2076 | 8000  | 0.6162          | 35.6743 |
| 0.6371        | 0.2335 | 9000  | 0.6057          | 34.4885 |
| 0.5909        | 0.2594 | 10000 | 0.5983          | 33.8377 |
| 0.6115        | 0.2854 | 11000 | 0.5895          | 34.3792 |
| 0.6298        | 0.3113 | 12000 | 0.5834          | 33.9399 |
| 0.6091        | 0.3373 | 13000 | 0.5757          | 33.0228 |
| 0.5421        | 0.3632 | 14000 | 0.5702          | 32.8652 |
| 0.6022        | 0.3892 | 15000 | 0.5654          | 33.0867 |
| 0.5559        | 0.4151 | 16000 | 0.5596          | 32.6704 |
| 0.5314        | 0.4410 | 17000 | 0.5544          | 32.3119 |
| 0.5714        | 0.4670 | 18000 | 0.5503          | 32.2174 |
| 0.5171        | 0.4929 | 19000 | 0.5471          | 31.9557 |
| 0.5268        | 0.5189 | 20000 | 0.5455          | 31.9144 |
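
The Wer column is the word error rate scaled by 100 (lower is better). Below is a small sketch of computing it with the `evaluate` library; the Hindi strings are illustrative, and this card does not state which scoring library the author used.

```python
# Illustrative WER computation; the example strings are made up for the demo.
import evaluate

wer_metric = evaluate.load("wer")

references = ["नमस्ते आप कैसे हैं"]   # ground-truth transcript
predictions = ["नमस्ते आप कैसे है"]   # model output with one substituted word

# WER = (substitutions + insertions + deletions) / reference word count,
# scaled by 100 to match the table above. Here: 1 error / 4 words = 25.0.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```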

### Framework versions

- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0