---
datasets:
- facebook/multilingual_librispeech
- Parlament-Parla-v1
- gttsehu/basque_parliament_1
- facebook/voxpopuli
- johnatanebonilla/coser_lv_full
- collectivat/tv3_parla
- mozilla-foundation/common_voice_16_0
language:
- es
- ca
metrics:
- wer
- cer
tags:
- automatic-speech-recognition
- speech
- multilingual
- nemo
model-index:
- name: Mohammed-Alzahrani-ai/stt_ca-es_conformer_transducer_large_fine_tuned
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      type: automatic-speech-recognition
      name: Combined (Parlament-Parla-v1, MLS, Voxpopuli, etc.)
    metrics:
    - name: WER (Spanish)
      type: wer
      value: 0.08
    - name: CER (Spanish)
      type: cer
      value: 0.04
    - name: WER (Catalan)
      type: wer
      value: 0.10
    - name: CER (Catalan)
      type: cer
      value: 0.05
---

# NVIDIA Conformer-Transducer Large (ca-es)

## Table of Contents
<details>
<summary>Click to expand</summary>

- [Summary](#summary)
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [Training Details](#training-details)

</details>
## Summary

The "stt_ca-es_conformer_transducer_large" is an acoustic model based on ["NVIDIA/stt_es_conformer_transducer_large"](https://huggingface.co/nvidia/stt_es_conformer_transducer_large/), suitable for bilingual Catalan-Spanish Automatic Speech Recognition.

## Model Description

This model transcribes speech and was fine-tuned on a bilingual Catalan-Spanish (ca-es) dataset comprising 4,000 hours of audio. It is a "large" variant of Conformer-Transducer, with around 120 million parameters. We expanded its tokenizer vocabulary to 5.5k tokens to include lowercase and uppercase characters as well as punctuation.

See the [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#conformer-transducer) for complete architecture details.

## Intended Uses and Limitations

This model can be used for Automatic Speech Recognition (ASR) in Catalan and Spanish. It is intended to transcribe audio files in Catalan and Spanish to plain text with punctuation.

### Installation

To use this model, install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend installing it after you have installed the latest version of PyTorch.

```
pip install nemo_toolkit['all']
```

### For Inference

To transcribe audio in Catalan or in Spanish using this model, you can follow this example:

```python
import nemo.collections.asr as nemo_asr

# Path to the downloaded .nemo checkpoint and to the audio file to transcribe
model_path = "stt_ca-es_conformer_transducer_large_fine_tuned.nemo"
audio_path = "audio.wav"

nemo_asr_model = nemo_asr.models.EncDecRNNTBPEModel.restore_from(model_path)
transcription = nemo_asr_model.transcribe([audio_path])[0].text
print(transcription)
```

## Training Details

### Training data

The model was fine-tuned on bilingual datasets in Catalan and Spanish, for a total of 4,000 hours of speech, including:

- [Parlament-Parla-v1](https://openslr.org/59/)
- [multilingual_librispeech](https://huggingface.co/datasets/facebook/multilingual_librispeech)
- [basque_parliament_1](https://huggingface.co/datasets/gttsehu/basque_parliament_1)
- [Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli)
- [Coser](https://huggingface.co/datasets/johnatanebonilla/coser)
- [tv3_parla](https://huggingface.co/datasets/collectivat/tv3_parla)
- [common_voice_16_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_16_0)

(The datasets will be made accessible shortly.)

### Training procedure

This model is the result of fine-tuning ["projecte-aina/stt_ca-es_conformer_transducer_large"](https://huggingface.co/projecte-aina/stt_ca-es_conformer_transducer_large).

### Results

| Language | WER  | CER  |
|----------|------|------|
| Spanish  | 0.08 | 0.04 |
| Catalan  | 0.10 | 0.05 |
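
The WER and CER values above can be reproduced with any standard edit-distance scorer. Below is a minimal sketch using the `jiwer` package; the audio file names and reference transcripts are hypothetical placeholders, not the actual evaluation data behind the reported figures.

```python
# Hedged sketch: score the model's transcriptions with jiwer.
# "sample_ca.wav" / "sample_es.wav" and their reference texts are made-up examples.
import jiwer
import nemo.collections.asr as nemo_asr

model_path = "stt_ca-es_conformer_transducer_large_fine_tuned.nemo"  # assumed local checkpoint
audio_paths = ["sample_ca.wav", "sample_es.wav"]
references = ["bona tarda a tothom", "buenas tardes a todos"]

asr_model = nemo_asr.models.EncDecRNNTBPEModel.restore_from(model_path)
hypotheses = [hyp.text for hyp in asr_model.transcribe(audio_paths)]

print("WER:", jiwer.wer(references, hypotheses))  # word error rate over the set
print("CER:", jiwer.cer(references, hypotheses))  # character error rate over the set
```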