# Spanish-F5 TTS Inference API (Hugging Face)

This project exposes a Hugging Face Inference Endpoint for Spanish-F5, a Spanish-adapted version of the F5-TTS model. Given a reference audio clip and a target sentence, it synthesizes the sentence in the reference speaker's voice.
> Live inference is powered by Hugging Face Inference Endpoints.
## Credit

This project is based on jpgallegoar/Spanish-F5.

- Spanish-F5 by @jpgallegoar
- Model weights on Hugging Face

Adapted by @eloicito333. Licensed under the MIT License.
## How It Works

### Request Parameters

Send a POST request with a JSON body to the Hugging Face Inference Endpoint:

```jsonc
{
  "ref_audio": "<base64-encoded WAV>",    // string, required
  "ref_text": "Hola, ¿cómo estás?",       // string, optional (transcript of ref_audio)
  "gen_text": "Estoy muy bien, gracias.", // string, required (text to synthesize)
  "remove_silence": true,                 // boolean, optional (default: true)
  "speed": 1.0,                           // number, optional (default: 1.0)
  "cross_fade_duration": 0.15             // number, optional (default: 0.15)
}
```
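For reference, the `ref_audio` value is simply the raw bytes of the WAV file encoded as base64. A minimal Python sketch of assembling the request body (the field names match the listing above; the WAV bytes here are a local stand-in for a real recording):

```python
import base64
import json

# Stand-in for real WAV bytes (normally read from a file with open(..., "rb")).
wav_bytes = b"RIFF....WAVEfmt "

payload = {
    "ref_audio": base64.b64encode(wav_bytes).decode("utf-8"),  # required
    "ref_text": "Hola, ¿cómo estás?",        # optional transcript of ref_audio
    "gen_text": "Estoy muy bien, gracias.",  # required: text to synthesize
    "remove_silence": True,
    "speed": 1.0,
    "cross_fade_duration": 0.15,
}

body = json.dumps(payload)  # this string is what gets POSTed to the endpoint
```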
### Response Object

The response is a JSON object:

```jsonc
{
  "success": true,                              // boolean: true if synthesis succeeded
  "audio_base64": "<base64-encoded WAV output>" // string: base64 WAV audio (if success)
}
```
If an error occurs:

```jsonc
{
  "success": false,
  "error": "TypeError: some descriptive message" // string: error description
}
```
Use the `audio_base64` field to decode and save the resulting audio.
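That decoding step is just base64 in reverse; a short sketch, using a locally constructed stand-in for the response dict a real client would get from the endpoint:

```python
import base64

# Stand-in for what a real client would get from response.json().
response = {
    "success": True,
    "audio_base64": base64.b64encode(b"fake-wav-bytes").decode("utf-8"),
}

# Always check the success flag before decoding.
if response.get("success"):
    audio = base64.b64decode(response["audio_base64"])
    # In a real client: open("output.wav", "wb").write(audio)
else:
    print("Error:", response.get("error"))
```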
## Node.js Client Example (Using Fetch)

```js
import fs from "node:fs";

async function sendAudio() {
  const audioBuffer = fs.readFileSync("./example.wav");
  const audioBase64 = audioBuffer.toString("base64");

  const response = await fetch("https://your-hf-endpoint-url", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      ref_audio: audioBase64,
      ref_text: "Hola, ¿cómo estás?",
      gen_text: "Estoy muy bien, gracias.",
      remove_silence: true,
      speed: 1.0,
      cross_fade_duration: 0.15,
    }),
  });

  const result = await response.json();
  if (result.audio_base64) {
    fs.writeFileSync("output.wav", Buffer.from(result.audio_base64, "base64"));
    console.log("Audio saved to output.wav");
  } else {
    console.error("Error:", result);
  }
}

sendAudio();
```
## Python Client Example (Optional)

```python
import base64

import requests

with open("ref.wav", "rb") as f:
    audio_base64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post("https://your-hf-endpoint-url", json={
    "ref_audio": audio_base64,
    "ref_text": "Hola, ¿cómo estás?",
    "gen_text": "Estoy muy bien, gracias.",
    "remove_silence": True,
    "speed": 1.0,
    "cross_fade_duration": 0.15,
})

result = response.json()
if response.ok and result.get("audio_base64"):
    with open("output.wav", "wb") as out:
        out.write(base64.b64decode(result["audio_base64"]))
    print("Audio saved to output.wav")
else:
    print("Error:", result)
```
## License

MIT License. See LICENSE for more information.
## Author

Adapted by @eloicito333.