---
library_name: transformers
language:
  - sw
license: apache-2.0
base_model: openai/whisper-small
tags:
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_13_0
metrics:
  - wer
model-index:
  - name: Whisper_Small_swahili_normal
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: Common Voice 13
          type: mozilla-foundation/common_voice_13_0
          config: default
          split: test
          args: default
        metrics:
          - name: Wer
            type: wer
            value: 5.506814977283409
---

Whisper_Small_swahili_normal

This model is a fine-tuned version of openai/whisper-small on the Swahili portion of the Common Voice 13.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0724
  • Wer Ortho: 5.5083
  • Wer: 5.5068
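
A minimal transcription sketch is shown below. The repository ID is an assumption based on this card's title (substitute the actual ID), and the audio path is a placeholder:

```python
# Minimal inference sketch using the transformers ASR pipeline.
# The model ID below is assumed from the card title; replace it with the real repo ID.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kazeric/Whisper_Small_swahili_normal",  # assumed repository ID
)

# "audio.wav" is a placeholder path to a 16 kHz Swahili speech recording.
result = asr("audio.wav", generate_kwargs={"language": "swahili", "task": "transcribe"})
print(result["text"])
```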

Model description

Whisper_Small_swahili_normal is openai/whisper-small fine-tuned on Swahili (sw) speech data for automatic speech recognition.

Intended uses & limitations

More information needed

Training and evaluation data

Training and evaluation used Swahili speech from the mozilla-foundation/common_voice_13_0 dataset; the metrics reported above are computed on its test split.
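
Loading the corresponding data might look like the sketch below; the "sw" configuration name is inferred from the card's language tag rather than stated explicitly:

```python
# Sketch of loading Swahili Common Voice 13.0 with the datasets library.
# The "sw" config name is an assumption inferred from the model's language tag.
from datasets import Audio, load_dataset

common_voice = load_dataset("mozilla-foundation/common_voice_13_0", "sw")

# Whisper models expect 16 kHz audio, so resample the audio column accordingly.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))

print(common_voice)
```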

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_steps: 50
  • training_steps: 2000
  • mixed_precision_training: Native AMP
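
A sketch of these settings expressed as Seq2SeqTrainingArguments; values not listed above (output_dir, and fp16 standing in for Native AMP) are assumptions:

```python
# Sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# Only the listed values come from this card; output_dir is an assumed placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-sw",          # assumed placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",                      # AdamW (torch) with default betas/epsilon
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=2000,
    fp16=True,                                # Native AMP mixed precision
)
```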

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer Ortho | Wer     |
|--------------:|-------:|-----:|----------------:|----------:|--------:|
| 0.1028        | 1.1220 | 500  | 0.1930          | 18.2956   | 18.2829 |
| 0.0067        | 3.116  | 1000 | 0.0871          | 7.6330    | 7.6218  |
| 0.0026        | 5.11   | 1500 | 0.0715          | 6.2008    | 6.2040  |
| 0.0007        | 7.104  | 2000 | 0.0724          | 5.5083    | 5.5068  |
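
Wer Ortho presumably scores the raw (orthographic) transcripts while Wer scores normalized text; a sketch of how the two metrics are commonly computed with the evaluate library (the example strings are placeholders):

```python
# Sketch of orthographic vs. normalized WER with the evaluate library.
# The prediction/reference strings are placeholders, not outputs of this model.
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

predictions = ["habari ya asubuhi"]     # hypothetical model output
references = ["Habari ya asubuhi."]     # hypothetical ground-truth transcript

# Orthographic WER: raw text, casing and punctuation included.
wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)

# Normalized WER: lowercase and strip punctuation before scoring.
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)

print(f"WER (ortho): {wer_ortho:.2f}  WER: {wer:.2f}")
```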

Framework versions

  • Transformers 4.50.2
  • Pytorch 2.6.0+cu124
  • Datasets 3.5.0
  • Tokenizers 0.21.1