Spanish-F5 TTS Inference API (Hugging Face)

This project exposes a Hugging Face Inference Endpoint for Spanish-F5, a Spanish-adapted version of the F5-TTS model. Given a reference audio clip and a target sentence, it synthesizes the target speech in the reference voice.

✨ Live inference is powered by Hugging Face Inference Endpoints.


πŸ”— Credit

This project is based on jpgallegoar/Spanish-F5.

Adapted by @eloicito333.

Licensed under the MIT License.


βš™οΈ How It Works

πŸ”½ Request Parameters

Send a POST request with a JSON body to the Hugging Face Inference Endpoint:

{
  "ref_audio": "<base64-encoded WAV>",               // string, required
  "ref_text": "Hola, ΒΏcΓ³mo estΓ‘s?",                 // string, optional (transcript of ref_audio)
  "gen_text": "Estoy muy bien, gracias.",           // string, required (text to synthesize)
  "remove_silence": true,                            // boolean, optional (default: true)
  "speed": 1.0,                                       // number, optional (default: 1.0)
  "cross_fade_duration": 0.15                        // number, optional (default: 0.15)
}

πŸ”Ό Response Object

The response will be a JSON object:

{
  "success": true,                                     // boolean: true if synthesis succeeded
  "audio_base64": "<base64-encoded WAV output>"       // string: base64 WAV audio (if success)
}

If an error occurs:

{
  "success": false,
  "error": "TypeError: some descriptive message"       // string: error description
}

Use the audio_base64 field to decode and save the resulting audio.
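
As a minimal illustration (the full client examples below do the same end to end), here is how the audio_base64 field can be decoded in Python. The `result` dict is a stand-in for the parsed JSON response, populated with placeholder bytes rather than a real endpoint response:

```python
import base64

# Stand-in for the parsed JSON response; in practice this comes from the endpoint.
result = {
    "success": True,
    "audio_base64": base64.b64encode(b"RIFF....WAVEfmt ").decode("ascii"),
}

if result.get("success"):
    # Decode the base64 payload back into raw WAV bytes and write them to disk.
    audio_bytes = base64.b64decode(result["audio_base64"])
    with open("output.wav", "wb") as f:
        f.write(audio_bytes)
else:
    print("Synthesis failed:", result.get("error"))
```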


πŸ€– Node.js Client Example (Using Fetch)

import fs from "fs";

async function sendAudio() {
  const audioBuffer = fs.readFileSync("./example.wav");
  const audioBase64 = audioBuffer.toString("base64");

  const response = await fetch("https://your-hf-endpoint-url", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      ref_audio: audioBase64,
      ref_text: "Hola, ΒΏcΓ³mo estΓ‘s?",
      gen_text: "Estoy muy bien, gracias.",
      remove_silence: true,
      speed: 1.0,
      cross_fade_duration: 0.15,
    })
  });

  const result = await response.json();

  if (result.audio_base64) {
    fs.writeFileSync("output.wav", Buffer.from(result.audio_base64, "base64"));
    console.log("Audio saved to output.wav");
  } else {
    console.error("Error:", result);
  }
}

sendAudio();

πŸ”¬ Python Client Example (Optional)

import requests
import base64

with open("ref.wav", "rb") as f:
    audio_base64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post("https://your-hf-endpoint-url", json={
    "ref_audio": audio_base64,
    "ref_text": "Hola, ΒΏcΓ³mo estΓ‘s?",
    "gen_text": "Estoy muy bien, gracias.",
    "remove_silence": True,
    "speed": 1.0,
    "cross_fade_duration": 0.15
})

if response.ok and response.json().get("audio_base64"):
    with open("output.wav", "wb") as out:
        out.write(base64.b64decode(response.json()["audio_base64"]))
    print("Audio saved to output.wav")
else:
    print("Error:", response.json())

πŸŽ“ License

MIT License. See LICENSE for more information.


✏️ Author

Adapted by @eloicito333.
