---
language:
- hi
tags:
- text-to-speech
- tts
- audio
- speech-synthesis
- orpheus
- gguf
license: apache-2.0
datasets:
- internal
---

# Orpheus-3b-FT-Q8_0

This is a quantised version of [canopylabs/3b-hi-ft-research_release](https://huggingface.co/canopylabs/3b-hi-ft-research_release).

Orpheus is a high-performance text-to-speech model fine-tuned for natural, emotional speech synthesis. This repository hosts the 8-bit quantised version of the 3B-parameter model, optimised for efficiency while maintaining high-quality output.

## Model Description

**Orpheus-3b-FT-Q8_0** is a 3-billion-parameter text-to-speech model that converts text input into natural-sounding speech with support for emotional expression. The model has been quantised to 8-bit (Q8_0) format for efficient inference, making it accessible on consumer hardware.

Key features:

- A single expressive Hindi voice
- Support for emotion tags such as laughter and sighs
- Optimised for CUDA acceleration on RTX GPUs
- Produces high-quality 24kHz mono audio
- Fine-tuned for conversational naturalness

## How to Use

This model is designed to be used with an LLM inference server connected to the [Orpheus-FastAPI](https://github.com/Lex-au/Orpheus-FastAPI) frontend, which provides both a web UI and OpenAI-compatible API endpoints.

### Compatible Inference Servers

This quantised model can be loaded into any of these LLM inference servers:

- [GPUStack](https://github.com/gpustack/gpustack) - GPU-optimised LLM inference server (my pick) - supports LAN/WAN tensor-split parallelisation
- [LM Studio](https://lmstudio.ai/) - load the GGUF model and start the local server
- [llama.cpp server](https://github.com/ggerganov/llama.cpp) - run with the appropriate model parameters
- Any other OpenAI API-compatible server

### Quick Start

1.
Download this quantised model from [lex-au's Orpheus-FASTAPI collection](https://huggingface.co/collections/lex-au/orpheus-fastapi-67e125ae03fc96dae0517707).
2. Load the model in your preferred inference server and start the server.
3. Clone the Orpheus-FastAPI repository:

   ```bash
   git clone https://github.com/Lex-au/Orpheus-FastAPI.git
   cd Orpheus-FastAPI
   ```

4. Configure the FastAPI server to connect to your inference server by setting the `ORPHEUS_API_URL` environment variable.
5. Follow the complete installation and setup instructions in the [repository README](https://github.com/Lex-au/Orpheus-FastAPI).

### Available Voices

The model provides a single voice:

- `ऋतिका`: female, Hindi, expressive

### Emotion Tags

You can add expressiveness to speech by inserting tags into the text:

- `<laugh>`, `<chuckle>`: laughter sounds
- `<sigh>`: sighing sounds
- `<cough>`, `<sniffle>`: subtle interruptions
- `<groan>`, `<yawn>`, `<gasp>`: additional emotional expression

## Technical Specifications

- **Architecture**: Specialised token-to-audio sequence model
- **Parameters**: ~3 billion
- **Quantisation**: 8-bit (GGUF Q8_0 format)
- **Audio Sample Rate**: 24kHz
- **Input**: Text with optional voice selection and emotion tags
- **Output**: High-quality WAV audio
- **Language**: Hindi
- **Hardware Requirements**: CUDA-compatible GPU (recommended: RTX series)
- **Integration Method**: External LLM inference server + Orpheus-FastAPI frontend

## Limitations

- Currently supports Hindi text only
- Best performance achieved on CUDA-compatible GPUs
- Generation speed depends on GPU capability

## License

This model is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

## Citation & Attribution

The original Orpheus model was created by Canopy Labs. This repository contains a quantised version optimised for use with the Orpheus-FastAPI server.
If you use this quantised model in your research or applications, please cite:

```bibtex
@misc{orpheus-tts-2025,
  author       = {Canopy Labs},
  title        = {Orpheus-3b-0.1-ft: Text-to-Speech Model},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/canopylabs/orpheus-3b-0.1-ft}}
}

@misc{orpheus-quantised-2025,
  author       = {Lex-au},
  title        = {Orpheus-3b-FT-Q8_0: Quantised TTS Model with FastAPI Server},
  note         = {GGUF quantisation of canopylabs/orpheus-3b-0.1-ft},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/lex-au/Orpheus-3b-FT-Q8_0.gguf}}
}
```
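## Appendix: Example API Usage

As a minimal sketch, the pieces described above (an OpenAI-compatible endpoint, the `ऋतिका` voice, emotion tags in the input text) can be exercised with a short Python client. The endpoint path `/v1/audio/speech`, the port, and the payload field names are assumptions based on the OpenAI audio API convention — check the Orpheus-FastAPI README for the exact route and defaults your version exposes.

```python
"""Hypothetical client for an Orpheus-FastAPI server.

Assumptions (not confirmed by this model card): the server listens on
localhost:5005 and exposes an OpenAI-style POST /v1/audio/speech route
that returns raw WAV bytes.
"""
import json
import urllib.request

API_URL = "http://127.0.0.1:5005/v1/audio/speech"  # assumed default


def build_tts_payload(text: str, voice: str = "ऋतिका") -> dict:
    """Build the JSON request body (field names assumed from the
    OpenAI-compatible /v1/audio/speech schema)."""
    return {"model": "orpheus", "input": text, "voice": voice}


def synthesise(text: str, out_path: str = "output.wav") -> None:
    """POST the text to the TTS endpoint and save the returned WAV."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_tts_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
```

With a server running, a call such as `synthesise("नमस्ते <laugh> आप कैसे हैं?")` would write `output.wav`; emotion tags are embedded directly in the input string.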