wav2vec2-ft-lre5-adm-ga2b16-st15k-v2

This model is a fine-tuned version of jonatasgrosman/wav2vec2-large-english on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.5870
  • Wer: 0.8445
  • Cer: 0.5449
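
Since this checkpoint is a wav2vec2 CTC model for English ASR, a minimal inference sketch may be useful. This is an assumption-laden example, not part of the original card: `sample.wav` is a hypothetical local audio file, and greedy decoding is used for simplicity.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "HouraMor/wav2vec2-ft-lre5-adm-ga2b16-st15k-v2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load a clip (placeholder path); wav2vec2 checkpoints expect mono 16 kHz input.
waveform, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: frame-wise argmax, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```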

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 15000
  • mixed_precision_training: Native AMP
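
As a sketch only, the list above maps onto Hugging Face TrainingArguments roughly as follows. The actual training script is not part of this card, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration listed above.
training_args = TrainingArguments(
    output_dir="wav2vec2-ft-lre5-adm-ga2b16-st15k-v2",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=15_000,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # Native AMP mixed precision
)
```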

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer    | Cer    |
|:-------------:|:------:|:-----:|:---------------:|:------:|:------:|
| 3.3794        | 0.4165 | 1000  | 3.3428          | 1.0000 | 1.0000 |
| 3.2665        | 0.8330 | 2000  | 3.3404          | 1.0000 | 1.0000 |
| 3.2044        | 1.2495 | 3000  | 3.2870          | 1.0000 | 1.0000 |
| 3.2642        | 1.6660 | 4000  | 3.3091          | 1.0000 | 1.0000 |
| 3.1645        | 2.0825 | 5000  | 3.2496          | 1.0000 | 1.0000 |
| 3.0649        | 2.4990 | 6000  | 3.1114          | 0.9971 | 0.9687 |
| 2.8293        | 2.9155 | 7000  | 2.8214          | 0.9283 | 0.6287 |
| 2.7508        | 3.3319 | 8000  | 2.6857          | 0.8816 | 0.5757 |
| 2.5881        | 3.7484 | 9000  | 2.6349          | 0.8577 | 0.5662 |
| 2.5849        | 4.1649 | 10000 | 2.6452          | 0.8601 | 0.5625 |
| 2.4879        | 4.5814 | 11000 | 2.6279          | 0.8521 | 0.5492 |
| 2.5049        | 4.9979 | 12000 | 2.6028          | 0.8508 | 0.5492 |
| 2.4675        | 5.4144 | 13000 | 2.6280          | 0.8540 | 0.5437 |
| 2.4701        | 5.8309 | 14000 | 2.5934          | 0.8461 | 0.5439 |
| 2.4516        | 6.2474 | 15000 | 2.5870          | 0.8445 | 0.5449 |
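
The WER and CER columns can be computed with the `evaluate` library. A minimal sketch with placeholder transcripts (the evaluation data itself is not part of this card):

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["the cat sat on the mat"]  # placeholder model transcriptions
references = ["the cat sat on a mat"]     # placeholder ground-truth transcripts

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```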

Framework versions

  • Transformers 4.52.3
  • Pytorch 2.7.0+cu118
  • Datasets 3.6.0
  • Tokenizers 0.21.1