---
library_name: transformers
language:
- si
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- Lingalingeswaran/asr-sinhala-dataset_json_v1
model-index:
- name: Whisper Small sinhala - Lingalingeswaran
  results: []
pipeline_tag: automatic-speech-recognition
---

# Whisper Small sinhala - Lingalingeswaran

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Lingalingeswaran/asr-sinhala-dataset_json_v1 dataset.

## Model description

This Whisper model has been fine-tuned specifically for the Sinhala language using the Common Voice 11.0 dataset. It is designed for tasks such as speech-to-text transcription and language identification, making it suitable for applications where Sinhala is the primary language of interest. The fine-tuning focused on lowering the transcription error rate and improving overall accuracy for Sinhala.

## Intended uses & limitations

Intended uses:

- Speech-to-text transcription in Sinhala

Limitations:

- May not perform as well on languages or dialects that are not well represented in the Common Voice dataset.
- Higher word error rate (WER) in noisy environments or with speakers whose accents are not covered in the training data.
- The model is optimized for Sinhala; performance on other languages is likely to be poor.

## Training and evaluation data

The training data for this model consists of Sinhala voice recordings from the Mozilla Foundation's Common Voice 11.0 dataset, a crowd-sourced collection of transcribed speech that provides diversity in speaker accents, age groups, and speech styles.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

A sketch of how these values map to `Seq2SeqTrainingArguments` is shown at the end of this card.

### Framework versions

- Transformers 4.48.1
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0

## Example Usage

Here is an example of how to use the model for Sinhala speech recognition with Gradio:

```python
import gradio as gr
from transformers import pipeline

# Initialize the ASR pipeline with the fine-tuned model
pipe = pipeline(model="Lingalingeswaran/whisper-small-sinhala")

def transcribe(audio):
    # Transcribe the audio file at the given path to text
    text = pipe(audio)["text"]
    return text

# Create the Gradio interface: accept microphone input or an uploaded file
iface = gr.Interface(
    fn=transcribe,
    inputs=gr.Audio(sources=["microphone", "upload"], type="filepath"),
    outputs="text",
    title="Whisper Small Sinhala",
    description="Real-time demo for Sinhala speech recognition using a fine-tuned Whisper small model.",
)

# Launch the interface
if __name__ == "__main__":
    iface.launch()
```
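If you only need transcription without a UI, a minimal sketch using the same `pipeline` API is shown below; the audio path `sample.wav` is a placeholder for a local Sinhala recording:

```python
from transformers import pipeline

# Load the fine-tuned Sinhala model; the ASR task is inferred from the model config
pipe = pipeline("automatic-speech-recognition", model="Lingalingeswaran/whisper-small-sinhala")

# "sample.wav" is a placeholder path to a local audio file
result = pipe("sample.wav")
print(result["text"])
```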
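Finally, for anyone reproducing the fine-tuning, here is a sketch of how the values under "Training hyperparameters" might be expressed with `Seq2SeqTrainingArguments`; the `output_dir` is a hypothetical path and the `fp16` flag is an assumption based on the "Native AMP" note, neither taken from the actual training script:

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed in this card
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-sinhala",  # hypothetical output path, not from the original run
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",        # AdamW; betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,                  # "Native AMP" mixed-precision training
)
```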