Fine-tuned `openai/whisper-small` on 3,620 Greek training audio samples from `mozilla-foundation/common_voice_17_0`.

This model was created from the Mozilla.ai Blueprint: speech-to-text-finetune.

Evaluation results on 1,701 Greek audio samples:

| Model | Word Error Rate | Loss |
| --- | --- | --- |
| Baseline `openai/whisper-small` (before fine-tuning) | 46.392 | 0.902 |
| Fine-tuned (this model) | 45.632 | 0.869 |
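For reference, the WER numbers above are the word-level edit distance between the model transcript and the reference transcript, divided by the number of reference words, expressed as a percentage. Below is a minimal pure-Python sketch of that computation; the actual evaluation pipeline likely uses a library such as `evaluate` or `jiwer`, so this is illustrative only.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / number of reference words, as a percentage."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution (or match)
            )
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words -> 100/6 ≈ 16.667
print(round(wer("the cat sat on the mat", "the cat sat on mat"), 3))
```

On the figures above, fine-tuning lowers WER by 0.760 points absolute (46.392 to 45.632), roughly a 1.6% relative reduction.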
Model size: 242M parameters (F32, Safetensors).
