Whisper Small Spanish

This model is a fine-tuned version of openai/whisper-small on the mozilla-foundation/common_voice_13_0 Spanish (es) dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.2212
  • WER: 8.2668 %
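
A minimal usage sketch for transcription with the standard transformers ASR pipeline, assuming a recent transformers release; the audio file path is a placeholder, not something shipped with this card:

```python
from transformers import pipeline

# Load this checkpoint for Spanish speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="zuazo/whisper-small-es",
)

# "audio.wav" is a placeholder path; the pipeline decodes and resamples
# the input audio to the model's expected 16 kHz.
result = asr(
    "audio.wav",
    generate_kwargs={"language": "spanish", "task": "transcribe"},
)
print(result["text"])
```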

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training-arguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
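
For reference, a sketch of how the values above map onto Seq2SeqTrainingArguments in transformers. This is an assumption-laden reconstruction, not the card's actual training script: output_dir is a placeholder, and mapping train_batch_size to per_device_train_batch_size assumes single-device training:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: fields mirror the hyperparameters listed above; anything
# not listed on this card is an assumed placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-es",   # assumed placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=64,    # assumes a single training device
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
)
```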

Training results

Training Loss | Epoch | Step | Validation Loss | WER (%)
0.132         | 2.0   | 1000 | 0.2461          | 9.5267
0.1288        | 4.01  | 2000 | 0.2251          | 8.5215
0.0814        | 6.01  | 3000 | 0.2212          | 8.2668
0.0905        | 8.01  | 4000 | 0.2310          | 8.4997
0.0319        | 10.02 | 5000 | 0.2358          | 8.5343
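
The WER column is presumably the word error rate reported as a percentage. A minimal sketch of computing it with the Hugging Face evaluate library, using placeholder transcripts rather than card data:

```python
import evaluate

# Load the standard word error rate metric.
wer_metric = evaluate.load("wer")

# Placeholder transcripts for illustration; a real evaluation would use
# the Common Voice 13.0 Spanish test references and model predictions.
references = ["hola mundo", "buenos días"]
predictions = ["hola mundo", "buenos dias"]

# evaluate returns a fraction; multiply by 100 to match the card's numbers.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}")
```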

Framework versions

  • Transformers 4.33.0.dev0
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.4
  • Tokenizers 0.13.3

Citation

If you use these models in your research, please cite:

@misc{dezuazo2025whisperlmimprovingasrmodels,
      title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages}, 
      author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
      year={2025},
      eprint={2503.23542},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.23542}, 
}

Please check the related paper preprint, arXiv:2503.23542, for more details.

Licensing

This model is available under the Apache-2.0 License. You are free to use, modify, and distribute this model as long as you credit the original creators.
