__g__d____model

This model is a fine-tuned version of openai/whisper-medium on the whsNect/g__d_ dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0502
  • Wer: 8.4602

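For reference, a minimal inference sketch is shown below. It is not part of the original card: it assumes the checkpoint is publicly accessible on the Hugging Face Hub under the repository id used in the code, and "sample.wav" is a placeholder path to a 16 kHz mono recording.

```python
# Minimal inference sketch (not taken from this card).
# Assumes the whsNect/__g__d____model checkpoint is accessible on the Hub
# and that "sample.wav" is a placeholder path to a 16 kHz mono recording.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="whsNect/__g__d____model",
    device=0 if torch.cuda.is_available() else -1,
)

result = asr("sample.wav", return_timestamps=True)
print(result["text"])
```
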
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 5e-06
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 15000
  • mixed_precision_training: Native AMP

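The sketch below shows how these values might map onto transformers' Seq2SeqTrainingArguments. It is a best-effort reconstruction rather than the author's training script: the output_dir is a placeholder, and the model, dataset, and data-collator wiring are omitted.

```python
# Sketch of the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# Only the values listed above come from this card; output_dir is a placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-finetuned",   # placeholder path
    learning_rate=5e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,             # 32 * 4 = 128 total train batch size
    optim="adamw_bnb_8bit",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=15000,
    fp16=True,                                 # "Native AMP" mixed precision
)
```
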
Training results

| Training Loss | Epoch   | Step  | Validation Loss | Wer     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.0361        | 1.6722  | 500   | 0.0385          | 9.2003  |
| 0.0099        | 3.3445  | 1000  | 0.0313          | 5.2457  |
| 0.006         | 5.0167  | 1500  | 0.0335          | 6.3769  |
| 0.003         | 6.6890  | 2000  | 0.0348          | 4.8773  |
| 0.0021        | 8.3612  | 2500  | 0.0351          | 17.5822 |
| 0.0013        | 10.0334 | 3000  | 0.0369          | 5.0892  |
| 0.0016        | 11.7057 | 3500  | 0.0371          | 10.6837 |
| 0.0011        | 13.3779 | 4000  | 0.0367          | 5.8716  |
| 0.0014        | 15.0502 | 4500  | 0.0385          | 46.1350 |
| 0.0008        | 16.7224 | 5000  | 0.0408          | 10.2338 |
| 0.0006        | 18.3946 | 5500  | 0.0400          | 9.9077  |
| 0.0007        | 20.0669 | 6000  | 0.0410          | 11.2053 |
| 0.0003        | 21.7391 | 6500  | 0.0414          | 22.9192 |
| 0.0002        | 23.4114 | 7000  | 0.0415          | 17.6768 |
| 0.0009        | 25.0836 | 7500  | 0.0420          | 22.1074 |
| 0.0005        | 26.7559 | 8000  | 0.0440          | 14.8828 |
| 0.0005        | 28.4281 | 8500  | 0.0417          | 10.4065 |
| 0.0001        | 30.1003 | 9000  | 0.0441          | 20.4545 |
| 0.0001        | 31.7726 | 9500  | 0.0453          | 9.3176  |
| 0.0001        | 33.4448 | 10000 | 0.0460          | 11.3553 |
| 0.0001        | 35.1171 | 10500 | 0.0466          | 10.9999 |
| 0.0001        | 36.7893 | 11000 | 0.0471          | 11.0749 |
| 0.0001        | 38.4615 | 11500 | 0.0479          | 12.3887 |
| 0.0           | 40.1338 | 12000 | 0.0483          | 10.3413 |
| 0.0           | 41.8060 | 12500 | 0.0487          | 8.3363  |
| 0.0001        | 43.4783 | 13000 | 0.0491          | 8.6852  |
| 0.0           | 45.1505 | 13500 | 0.0495          | 7.7462  |
| 0.0           | 46.8227 | 14000 | 0.0499          | 8.1472  |
| 0.0           | 48.4950 | 14500 | 0.0501          | 7.9516  |
| 0.0           | 50.1672 | 15000 | 0.0502          | 8.4602  |

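The Wer column appears to be reported on a 0–100 (percentage) scale. The snippet below is a minimal, hypothetical example of computing a comparable score with the evaluate library; the prediction and reference strings are placeholders, not samples from the evaluation set.

```python
# Minimal WER computation sketch using the evaluate library.
# The strings below are placeholders, not samples from the evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the quick brown fox jumps over the dog"]
references = ["the quick brown fox jumps over the lazy dog"]

# evaluate returns a fraction; multiply by 100 to match the table's scale
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```
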
Framework versions

  • Transformers 4.49.0
  • Pytorch 2.2.2+cu121
  • Datasets 3.4.1
  • Tokenizers 0.21.1