wav2vec2-ft-lre5-adm-ga2b16-st30k-v2

This model is a fine-tuned version of jonatasgrosman/wav2vec2-large-english on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2472
  • WER: 1.0000
  • CER: 1.0000
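
The card does not yet document usage, so here is a minimal, hedged loading sketch. It assumes this checkpoint exposes the standard Wav2Vec2ForCTC / Wav2Vec2Processor interface inherited from its jonatasgrosman/wav2vec2-large-english base; it is not an official example from the model author.

```python
# Minimal transcription sketch -- assumes the checkpoint follows the standard
# wav2vec2 CTC interface of its base model; not an official example.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "HouraMor/wav2vec2-ft-lre5-adm-ga2b16-st30k-v2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

def transcribe(speech, sampling_rate=16_000):
    # `speech` is a 1-D float array sampled at 16 kHz (wav2vec2's expected rate).
    inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    # Greedy CTC decoding: take the most likely token at each frame, then
    # collapse repeats and blanks via the tokenizer.
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)[0]
```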

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 30000
  • mixed_precision_training: Native AMP
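
These values map directly onto the standard 🤗 Trainer configuration. The training script itself is not included in this card, so the following is a hypothetical reconstruction of how the arguments above could be expressed:

```python
# Hypothetical reconstruction of the hyperparameters listed above;
# the actual training script is not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-ft-lre5-adm-ga2b16-st30k-v2",
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 16 * 2 = 32
    optim="adamw_torch",             # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=30_000,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```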

Training results

| Training Loss | Epoch   | Step  | Validation Loss | WER    | CER    |
|:-------------:|:-------:|:-----:|:---------------:|:------:|:------:|
| 3.4116        | 0.4165  | 1000  | 3.3778          | 1.0000 | 1.0000 |
| 3.2898        | 0.8330  | 2000  | 3.3594          | 1.0000 | 1.0000 |
| 3.2155        | 1.2495  | 3000  | 3.2941          | 1.0000 | 1.0000 |
| 3.2842        | 1.6660  | 4000  | 3.3287          | 1.0000 | 1.0000 |
| 3.2136        | 2.0825  | 5000  | 3.2953          | 1.0000 | 1.0000 |
| 3.2529        | 2.4990  | 6000  | 3.2965          | 1.0000 | 1.0000 |
| 3.2371        | 2.9155  | 7000  | 3.2881          | 1.0000 | 1.0000 |
| 3.2658        | 3.3319  | 8000  | 3.2694          | 1.0000 | 1.0000 |
| 3.1706        | 3.7484  | 9000  | 3.2601          | 1.0000 | 1.0000 |
| 3.1812        | 4.1649  | 10000 | 3.2748          | 1.0000 | 1.0000 |
| 3.1664        | 4.5814  | 11000 | 3.2633          | 1.0000 | 1.0000 |
| 3.1596        | 4.9979  | 12000 | 3.2687          | 1.0000 | 1.0000 |
| 3.1412        | 5.4144  | 13000 | 3.2663          | 1.0000 | 1.0000 |
| 3.1543        | 5.8309  | 14000 | 3.2640          | 1.0000 | 1.0000 |
| 3.1311        | 6.2474  | 15000 | 3.2485          | 1.0000 | 1.0000 |
| 3.1059        | 6.6639  | 16000 | 3.2373          | 1.0000 | 1.0000 |
| 3.1087        | 7.0804  | 17000 | 3.2766          | 1.0000 | 1.0000 |
| 3.1429        | 7.4969  | 18000 | 3.2375          | 1.0000 | 1.0000 |
| 3.1040        | 7.9134  | 19000 | 3.2643          | 1.0000 | 1.0000 |
| 3.1288        | 8.3299  | 20000 | 3.2411          | 1.0000 | 1.0000 |
| 3.1246        | 8.7464  | 21000 | 3.2481          | 1.0000 | 1.0000 |
| 3.0808        | 9.1628  | 22000 | 3.2583          | 1.0000 | 1.0000 |
| 3.0996        | 9.5793  | 23000 | 3.2445          | 1.0000 | 1.0000 |
| 3.0871        | 9.9958  | 24000 | 3.2660          | 1.0000 | 1.0000 |
| 3.0789        | 10.4123 | 25000 | 3.2566          | 1.0000 | 1.0000 |
| 3.0828        | 10.8288 | 26000 | 3.2379          | 1.0000 | 1.0000 |
| 3.1161        | 11.2453 | 27000 | 3.2458          | 1.0000 | 1.0000 |
| 3.0719        | 11.6618 | 28000 | 3.2473          | 1.0000 | 1.0000 |
| 3.0838        | 12.0783 | 29000 | 3.2451          | 1.0000 | 1.0000 |
| 3.0758        | 12.4948 | 30000 | 3.2472          | 1.0000 | 1.0000 |
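
The WER and CER columns are standard edit-distance metrics (word and character error rate). The card does not state how they were computed; a hedged sketch using the 🤗 `evaluate` library, with hypothetical strings standing in for the real transcripts, might look like:

```python
# Illustrative metric computation only; the actual evaluation code for this
# model is not documented in the card.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["hello world"]        # model transcriptions (hypothetical)
references = ["hello there world"]   # ground-truth transcripts (hypothetical)

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```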

Framework versions

  • Transformers 4.52.3
  • Pytorch 2.7.0+cu118
  • Datasets 3.6.0
  • Tokenizers 0.21.1