
Model Description

This model is an end-to-end deep-learning Kinyarwanda text-to-speech (TTS) model. Thanks to its zero-shot learning capabilities, new voices can be introduced with as little as one minute of speech. The model was trained with the Coqui TTS library using the YourTTS [1] architecture, on 67 hours of Kinyarwanda Bible recordings for 100 epochs.

Data Sources

  • Audio data: www.faithcomesbyhearing.com (Common Language Version, audio Old Testament)
  • Text data: www.bible.com (Bibiliya Ijambo ry'Imana (BIR); only the Old Testament was used)

Usage

Install the Coqui TTS library:

pip install TTS

Download the files from this repository, then run:

tts --text "text" --model_path best_model.pth --encoder_path SE_checkpoint.pth.tar --encoder_config_path config_se.json --config_path config.json --speakers_file_path speakers.pth --speaker_wav conditioning_audio.wav --out_path out.wav

Here the conditioning audio is a wav file used to condition the multi-speaker TTS model through the speaker encoder. You can pass multiple file paths to --speaker_wav; in that case the d-vector (speaker embedding) is computed as the average of the per-file embeddings.
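The averaging step can be sketched as follows. This is a minimal illustration, not code from the TTS library; the toy vectors stand in for real speaker-encoder outputs, and `average_d_vector` is a hypothetical helper name:

```python
import numpy as np

def average_d_vector(d_vectors):
    """Average per-file speaker embeddings (d-vectors) into a single
    conditioning vector, as done when several --speaker_wav files are given."""
    return np.stack(d_vectors).mean(axis=0)

# Toy embeddings standing in for real speaker-encoder outputs.
emb_a = np.array([1.0, 2.0, 3.0])
emb_b = np.array([3.0, 4.0, 5.0])
print(average_d_vector([emb_a, emb_b]))  # [2. 3. 4.]
```

Averaging makes the synthesized voice a blend of the reference clips, which is why several short recordings of the same speaker usually give a more stable voice than a single one.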

References

[1] Casanova et al., "YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for Everyone", ICML 2022.
