
wav2vec2_xls_r_300m_BIG-C_Bemba_10hr_v4

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on a 10-hour Bemba subset of the BIG-C dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7362
  • WER: 0.7282
  • CER: 0.1958
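WER (word error rate) and CER (character error rate) are both edit-distance rates: the minimum number of substitutions, insertions, and deletions needed to turn the hypothesis into the reference, divided by the reference length, computed over words or characters respectively. A minimal self-contained sketch of the metric (function names are illustrative, not from the training code; in practice a library such as `jiwer` or `evaluate` is typically used):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution
        prev = cur
    return prev[-1]

def wer(reference, hypothesis):
    """Word error rate: edit distance over word tokens / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: edit distance over characters / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

A CER well below the WER, as reported here (0.1958 vs. 0.7282), is common for low-resource ASR: many hypothesis words differ from the reference by only a character or two, so each counts as a full word error but only a small character error.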

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 100
  • mixed_precision_training: Native AMP
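The list above maps directly onto Hugging Face `TrainingArguments`. A hedged reconstruction follows; the `output_dir` is a placeholder, the actual training script is not published here, and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters as TrainingArguments (not the actual script).
args = TrainingArguments(
    output_dir="wav2vec2_xls_r_300m_bemba_10hr",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
)
```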

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER    | CER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 9.2663        | 0.3268  | 200  | 3.6743          | 1.0    | 1.0    |
| 3.1749        | 0.6536  | 400  | 2.8927          | 1.0    | 1.0    |
| 2.7952        | 0.9804  | 600  | 2.3885          | 1.0    | 0.9155 |
| 1.4097        | 1.3072  | 800  | 0.8953          | 0.8535 | 0.2054 |
| 0.9475        | 1.6340  | 1000 | 0.7894          | 0.7777 | 0.1771 |
| 0.8738        | 1.9608  | 1200 | 0.7285          | 0.7626 | 0.1766 |
| 0.8183        | 2.2876  | 1400 | 0.9577          | 0.9257 | 0.3696 |
| 0.6813        | 2.6144  | 1600 | 0.6992          | 0.7143 | 0.1708 |
| 0.6788        | 2.9412  | 1800 | 0.6921          | 0.7035 | 0.1652 |
| 0.6277        | 3.2680  | 2000 | 0.8239          | 0.8301 | 0.2821 |
| 0.5931        | 3.5948  | 2200 | 0.7037          | 0.7125 | 0.1818 |
| 0.6129        | 3.9216  | 2400 | 0.8998          | 0.8762 | 0.3413 |
| 0.5626        | 4.2484  | 2600 | 0.7700          | 0.8201 | 0.2259 |
| 0.5221        | 4.5752  | 2800 | 0.7325          | 0.7485 | 0.1917 |
| 0.534         | 4.9020  | 3000 | 0.7069          | 0.7325 | 0.1892 |
| 0.464         | 5.2288  | 3200 | 0.7738          | 0.7416 | 0.2102 |
| 0.4342        | 5.5556  | 3400 | 0.6993          | 0.6657 | 0.1726 |
| 0.4615        | 5.8824  | 3600 | 0.7637          | 0.6915 | 0.2038 |
| 0.3996        | 6.2092  | 3800 | 0.8508          | 0.7535 | 0.2423 |
| 0.4066        | 6.5359  | 4000 | 0.8229          | 0.7200 | 0.2134 |
| 0.3671        | 6.8627  | 4200 | 0.8800          | 0.7616 | 0.2736 |
| 0.3431        | 7.1895  | 4400 | 0.7773          | 0.6975 | 0.1983 |
| 0.3071        | 7.5163  | 4600 | 0.7757          | 0.6982 | 0.1816 |
| 0.3387        | 7.8431  | 4800 | 0.8184          | 0.6978 | 0.1948 |
| 0.318         | 8.1699  | 5000 | 0.8252          | 0.7157 | 0.2000 |
| 0.2712        | 8.4967  | 5200 | 0.9077          | 0.7203 | 0.2043 |
| 0.2907        | 8.8235  | 5400 | 0.9530          | 0.7550 | 0.2316 |
| 0.2574        | 9.1503  | 5600 | 0.8278          | 0.6963 | 0.1926 |
| 0.2475        | 9.4771  | 5800 | 0.9854          | 0.7638 | 0.2411 |
| 0.2602        | 9.8039  | 6000 | 1.0509          | 0.7938 | 0.2395 |
| 0.2572        | 10.1307 | 6200 | 0.9754          | 0.7698 | 0.2340 |
| 0.2224        | 10.4575 | 6400 | 0.8636          | 0.7174 | 0.1987 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.2.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1