Introduction
VT-Orpheus-3B-TTS-lora-adapter is a LoRA adapter fine-tuned from Orpheus-TTS.
The training dataset is https://huggingface.co/datasets/Jinsaryko/Ceylia.
Sample Audio
See the setup guide below for running the local Orpheus model with this LoRA adapter.
python gguf_orpheus.py --text "Seriously? <giggle> That's the cutest thing I've ever heard!" --voice ceylia
python gguf_orpheus.py --text "Hi! I'm Ceylia. <laugh> This is so exciting! <giggle>" --voice ceylia
python gguf_orpheus.py --text "Morning! <giggle> I finally finished that project last night. It took forever, but the results look amazing. <yawn> Sorry, still a bit tired from staying up so late." --voice ceylia
Running Locally
This section provides a step-by-step guide to running the VT-Orpheus-3B-TTS-Ceylia.Q4_K_M.gguf
model locally on your machine. There are two main methods to run this model:
Method 1: Using LM Studio (Recommended for beginners)
Prerequisites
- LM Studio installed on your computer
- Python 3.8+ installed
- The VT-Orpheus-3B-TTS-Ceylia.Q4_K_M.gguf model file
Setup Steps
1. Install LM Studio
   - Download and install LM Studio from lmstudio.ai
   - Launch LM Studio
2. Load the GGUF model
   - In LM Studio, click "Add Model"
   - Select the VT-Orpheus-3B-TTS-Ceylia.Q4_K_M.gguf file from your computer
   - Once added, click on the model to load it
3. Start the local server
   - Go to the "Local Server" tab in LM Studio
   - Click "Start Server" to launch the local API server (the default address is http://127.0.0.1:1234)
4. Clone the orpheus-tts-local repository
   git clone https://github.com/isaiahbjork/orpheus-tts-local.git
   cd orpheus-tts-local
5. Install dependencies
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
5.1 Edit gguf_orpheus.py to include the new ceylia voice
Open the gguf_orpheus.py file in the orpheus-tts-local directory, find the AVAILABLE_VOICES and DEFAULT_VOICE lines, and edit them to include the ceylia voice (the default voice is tara):
# Available voices based on the Orpheus-TTS repository
AVAILABLE_VOICES = ["tara", "leah", "jess", "leo", "dan", "mia", "zac", "zoe", "ceylia"]
DEFAULT_VOICE = "ceylia"
Save gguf_orpheus.py.
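To avoid a silent surprise when a voice name is mistyped, you could add a small guard around the constants you just edited. This is a sketch, not part of gguf_orpheus.py; `resolve_voice` is a hypothetical helper:

```python
# Mirrors the constants edited in gguf_orpheus.py (step 5.1).
AVAILABLE_VOICES = ["tara", "leah", "jess", "leo", "dan", "mia", "zac", "zoe", "ceylia"]
DEFAULT_VOICE = "ceylia"

def resolve_voice(requested: str) -> str:
    """Return the requested voice if it is known, otherwise fall back to the default."""
    return requested if requested in AVAILABLE_VOICES else DEFAULT_VOICE
```

For example, `resolve_voice("ceylia")` returns "ceylia", while a typo like `resolve_voice("ceilia")` falls back to the default.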
6. Run the model
python gguf_orpheus.py --text "Hi! I'm Ceylia. <laugh> This is so exciting! <giggle>" --voice ceylia --output output.wav
Available Parameters
- --text: The text to convert to speech (required)
- --voice: The voice to use (default: "tara"; use "ceylia" for this model)
- --output: Output WAV file path (default: auto-generated filename)
- --temperature: Temperature for generation (default: 0.6)
- --top_p: Top-p sampling parameter (default: 0.9)
- --repetition_penalty: Repetition penalty (default: 1.1)
- --backend: The backend to use (default: "lmstudio"; "ollama" is also supported)
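Under the hood, the script conditions generation on the chosen voice by prefixing it to the text before the prompt is sent to the local server. A minimal sketch of that idea, assuming a plain "voice: text" prefix (the exact prompt tokens live in gguf_orpheus.py; `format_prompt` here is a hypothetical helper, not the script's API):

```python
def format_prompt(text: str, voice: str = "ceylia") -> str:
    """Prefix the text with the voice name, Orpheus-style.

    Assumption: a bare "{voice}: {text}" prefix; gguf_orpheus.py may wrap
    this in additional special tokens before sending it to the server.
    """
    return f"{voice}: {text}"
```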
Method 2: Using llama.cpp directly
Prerequisites
- llama.cpp installed and built on your system
- The VT-Orpheus-3B-TTS-Ceylia.Q4_K_M.gguf model file
Setup Steps
- Clone and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
- Start the server (with a CMake build, the binary lands in build/bin)
./build/bin/llama-server -m /path/to/VT-Orpheus-3B-TTS-Ceylia.Q4_K_M.gguf --port 8080
- Clone orpheus-tts-local repository
git clone https://github.com/isaiahbjork/orpheus-tts-local.git
cd orpheus-tts-local
- Install dependencies
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
- Run the model with custom API URL
python gguf_orpheus.py --text "Hi! I'm Ceylia. <laugh> Let's play! <sniffle> This is so exciting! <giggle>" --voice ceylia --output output.wav --api_url http://localhost:8080/v1
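Both LM Studio and llama-server expose an OpenAI-compatible completions endpoint, so the request body the script sends looks roughly like the payload below. This is a sketch under that assumption, not the script's exact code; `build_completion_payload` is a hypothetical helper, and `repeat_penalty` is llama.cpp's spelling of the repetition-penalty knob:

```python
import json

def build_completion_payload(prompt: str,
                             temperature: float = 0.6,
                             top_p: float = 0.9,
                             repetition_penalty: float = 1.1,
                             max_tokens: int = 1200) -> str:
    """JSON body for an OpenAI-compatible /v1/completions endpoint
    (LM Studio on port 1234, or llama-server on port 8080)."""
    return json.dumps({
        "prompt": prompt,
        "temperature": temperature,
        "top_p": top_p,
        "repeat_penalty": repetition_penalty,  # llama.cpp's parameter name
        "max_tokens": max_tokens,
    })
```

The defaults here mirror the CLI defaults listed under "Available Parameters" above.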
Emotion Tags
You can add emotion to the speech by including the following tags in your text:
<giggle>, <laugh>, <chuckle>, <sigh>, <cough>, <sniffle>, <groan>, <yawn>, <gasp>
Example:
python gguf_orpheus.py --text "Hi! I'm Ceylia. <laugh> This is so exciting! <giggle>" --voice ceylia
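Tags outside the supported set are not guaranteed to produce the intended emotion, so it can help to sanity-check your text before synthesis. A small hypothetical checker against the tag list above (`check_emotion_tags` is not part of the project):

```python
import re

# Supported emotion tags from the list above.
EMOTION_TAGS = {"giggle", "laugh", "chuckle", "sigh", "cough",
                "sniffle", "groan", "yawn", "gasp"}

def check_emotion_tags(text: str) -> list:
    """Return any <tag> occurrences in the text that are not supported emotion tags."""
    return [t for t in re.findall(r"<(\w+)>", text) if t not in EMOTION_TAGS]
```

An empty return value means every tag in the text is supported.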
Troubleshooting
- Error connecting to server: Make sure the LM Studio server or the llama.cpp server is running on the correct port
- Low-quality audio: Try adjusting the temperature (higher = more variance) or repetition_penalty (>1.1 recommended)
- Slow generation: Reduce model precision or run on a more powerful GPU if available
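For the first point, a quick way to confirm the server is actually listening before running the script is a plain TCP probe. `server_reachable` is a hypothetical helper, not part of orpheus-tts-local:

```python
import socket

def server_reachable(host: str = "127.0.0.1", port: int = 1234,
                     timeout: float = 2.0) -> bool:
    """TCP check that a server (LM Studio: 1234, llama-server: 8080) is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `server_reachable(port=8080)` before step 5 of Method 2 tells you whether llama-server came up.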
Uploaded model
- Developed by: vinhnx90
- License: apache-2.0
- Finetuned from model : unsloth/orpheus-3b-0.1-ft-unsloth-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Model tree for vinhnx90/VT-Orpheus-3B-TTS-Ceylia-Q4KM-GGUFF: base model meta-llama/Llama-3.2-3B-Instruct