w2v-bert-2.0-chichewa_34h

This model is a fine-tuned version of facebook/w2v-bert-2.0 on the CLEAR-GLOBAL/CHICHEWA_34H dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3389
  • WER: 0.4045
  • CER: 0.1148

Model description

This model is a fine-tuned version of facebook/w2v-bert-2.0 adapted for Chichewa automatic speech recognition. The checkpoint holds roughly 606M parameters stored as F32 safetensors; the training data is described under Training and evaluation data below.

Intended uses & limitations

The model is intended for automatic speech recognition (transcription) of Chichewa audio. With an evaluation WER of roughly 0.40 and CER of roughly 0.11, transcripts will contain errors, so downstream use should allow for post-editing or human review.
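
Below is a minimal inference sketch using the transformers auto classes. It assumes the checkpoint ships a CTC head and a compatible processor; the audio path is hypothetical, and any recording resampled to 16 kHz mono will do.

```python
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

model_id = "CLEAR-Global/w2v-bert-2.0-chichewa_34h"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)
model.eval()

# Load audio and resample to the 16 kHz mono input w2v-bert-2.0 expects.
# "chichewa_sample.wav" is a hypothetical placeholder path.
waveform, sample_rate = torchaudio.load("chichewa_sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)
waveform = waveform.mean(dim=0)  # downmix to mono

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the most likely token at each frame;
# batch_decode collapses repeats and blanks into text.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```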

Training and evaluation data

The model was fine-tuned on the CLEAR-GLOBAL/CHICHEWA_34H dataset which, going by its name, comprises roughly 34 hours of transcribed Chichewa speech. The train/evaluation split is not further documented here.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 100000
  • mixed_precision_training: Native AMP
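
For reference, the list above maps onto Hugging Face transformers TrainingArguments roughly as follows. This is a sketch, not the original training script: output_dir is hypothetical, and any setting not listed above keeps its library default.

```python
from transformers import TrainingArguments

# Sketch reconstructing the listed hyperparameters; output_dir is a
# hypothetical name, and unlisted settings keep their defaults.
training_args = TrainingArguments(
    output_dir="w2v-bert-2.0-chichewa_34h",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=2,  # 32 x 2 = total train batch size 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=100_000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```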

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER    | CER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 0.4609        | 5.6197  | 1000 | 0.7327          | 0.6746 | 0.1953 |
| 0.1207        | 11.2366 | 2000 | 0.4130          | 0.4797 | 0.1341 |
| 0.1104        | 16.8563 | 3000 | 0.3404          | 0.4165 | 0.1182 |
| 0.0417        | 22.4732 | 4000 | 0.3389          | 0.4046 | 0.1149 |
| 0.0849        | 28.0901 | 5000 | 0.3593          | 0.3860 | 0.1110 |
| 0.0169        | 33.7099 | 6000 | 0.4053          | 0.3799 | 0.1086 |
| 0.0625        | 39.3268 | 7000 | 0.4394          | 0.3820 | 0.1103 |
| 0.0226        | 44.9465 | 8000 | 0.4477          | 0.3922 | 0.1099 |
| 0.0236        | 50.5634 | 9000 | 0.4660          | 0.3855 | 0.1101 |
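
The WER and CER columns can be reproduced with the evaluate library. A minimal sketch follows; the reference and prediction strings are hypothetical placeholders, not data from the evaluation set.

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Hypothetical reference transcripts and model outputs.
references = ["moni dziko lapansi"]
predictions = ["moni dziko lapansi"]

print("WER:", wer_metric.compute(references=references, predictions=predictions))
print("CER:", cer_metric.compute(references=references, predictions=predictions))
```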

Framework versions

  • Transformers 4.48.1
  • PyTorch 2.6.0+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1
