Sabela: Nos Project's Galician TTS Model
Model description
This model was trained from scratch using the Coqui TTS Python library on the Sabela corpus of the dataset CRPIH_UVigo-GL-Voices.
A live inference demo is available on our official page.
The model needs the Cotovía tool to work correctly. For installation and deployment, please consult the Cotovía preprocessor section below.
Intended uses and limitations
You can use this model to generate synthetic speech in Galician.
How to use
Cotovía preprocessor
To generate phonetic transcriptions, the Cotovía tool is needed. The tool can be downloaded from the SourceForge website. The required Debian packages are cotovia_0.5_amd64.deb and cotovia-lang-gl_0.5_all.deb, which can be installed with the following commands:
sudo dpkg -i cotovia_0.5_amd64.deb
sudo dpkg -i cotovia-lang-gl_0.5_all.deb
The following command generates the phonetic transcription of a text string:
echo "Era unha avioneta... O piloto era pequeno, que se chega a ser dos grande, tómbate!" | cotovia -t -n -S | iconv -f iso88591 -t utf8
The output of the command is the phonetic transcription of the input text. This string is the input used in the inference step, shown below.
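The preprocessing step above can also be wrapped in Python. A minimal sketch, assuming the cotovia binary and the Galician language pack are installed and on the PATH (the helper names here are illustrative, not part of the model's code):

```python
import subprocess

# Same flags as in the shell example above.
COTOVIA_CMD = ["cotovia", "-t", "-n", "-S"]

def decode_cotovia(raw: bytes) -> str:
    # Cotovía emits ISO-8859-1; decode it to a Python string,
    # mirroring the `iconv -f iso88591 -t utf8` step.
    return raw.decode("iso-8859-1").strip()

def phonemize(text: str) -> str:
    """Return the Cotovía phonetic transcription of `text`."""
    proc = subprocess.run(
        COTOVIA_CMD,
        input=text.encode("iso-8859-1", errors="replace"),
        stdout=subprocess.PIPE,
        check=True,
    )
    return decode_cotovia(proc.stdout)
```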
Required libraries:
pip install TTS
Synthesize speech using Python:
from TTS.utils.synthesizer import Synthesizer
model_path = "/path/to/checkpoint.pth"  # Absolute path to the model checkpoint.pth
config_path = "/path/to/config.json"  # Absolute path to the model config.json
text = "Text to synthesize"  # Phonetic transcription produced by Cotovía
synthesizer = Synthesizer(
model_path, config_path, None, None, None, None,
)
wavs = synthesizer.tts(text)
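The call above returns raw audio samples; Coqui's Synthesizer also provides synthesizer.save_wav(wavs, "output.wav") to write them to disk. As an alternative, here is a minimal sketch using only the standard wave module (the 16 kHz default below is an assumption; use the sample rate from the model's config.json):

```python
import wave
import numpy as np

def write_wav(samples, path, sample_rate=16000):
    # Clamp float samples to [-1, 1] and convert to 16-bit PCM.
    pcm = (np.clip(np.asarray(samples), -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)           # mono
        f.setsampwidth(2)           # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())
```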
Training
Training Procedure
Data preparation
Hyperparameters
The model is based on VITS, proposed by Kim et al. The following hyperparameters were set in the Coqui framework.
Hyperparameter | Value |
---|---|
Model | vits |
Batch Size | 48 |
Eval Batch Size | 16 |
Mixed Precision | true |
Window Length | 1024 |
Hop Length | 256 |
FFT Size | 1024 |
Num Mels | 80 |
Phonemizer | null |
Phoneme Language | null |
Text Cleaners | null |
Formatter | nos_fonemas |
Optimizer | adam |
Adam betas | (0.8, 0.99) |
Adam eps | 1e-09 |
Adam weight decay | 0.01 |
Learning Rate Gen | 0.0002 |
LR Scheduler Gen | ExponentialLR |
LR Scheduler Gamma Gen | 0.999875 |
Learning Rate Disc | 0.0002 |
LR Scheduler Disc | ExponentialLR |
LR Scheduler Gamma Disc | 0.999875 |
The model was trained for 256275 steps.
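For orientation, the hyperparameters above map roughly onto fields of a Coqui config.json like the fragment below. Exact field names vary between Coqui TTS versions, so treat this as an illustrative sketch rather than a drop-in config:

```json
{
    "model": "vits",
    "batch_size": 48,
    "eval_batch_size": 16,
    "mixed_precision": true,
    "audio": {
        "win_length": 1024,
        "hop_length": 256,
        "fft_size": 1024,
        "num_mels": 80
    },
    "phonemizer": null,
    "phoneme_language": null,
    "text_cleaner": null,
    "formatter": "nos_fonemas",
    "optimizer": "adam",
    "optimizer_params": {"betas": [0.8, 0.99], "eps": 1e-09, "weight_decay": 0.01},
    "lr_gen": 0.0002,
    "lr_scheduler_gen": "ExponentialLR",
    "lr_scheduler_gen_params": {"gamma": 0.999875},
    "lr_disc": 0.0002,
    "lr_scheduler_disc": "ExponentialLR",
    "lr_scheduler_disc_params": {"gamma": 0.999875}
}
```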
The nos_fonemas formatter is a modification of the LJSpeech formatter with one extra column for the normalized input (extended numbers and acronyms).
Additional information
Authors
Alp Öktem, Carmen Magariños and Antonio Moscoso.
Contact information
For further information, send an email to [email protected]
Licensing Information
Funding
This research was funded by “The Nós project: Galician in the society and economy of Artificial Intelligence”, resulting from the agreement 2021-CP080 between the Xunta de Galicia and the University of Santiago de Compostela, and thanks to the Investigo program, within the National Recovery, Transformation and Resilience Plan, within the framework of the European Recovery Fund (NextGenerationEU).
Citation information
If you use this model, please cite as follows:
Öktem, Alp; Magariños, Carmen; Moscoso, Antonio. 2024. Nos_TTS-sabela-vits-phonemes. URL: https://huggingface.co/proxectonos/Nos_TTS-sabela-vits-phonemes