# UniTTS

## Overview
We introduce UniTTS and DistilCodec. DistilCodec is a single-codebook audio codec with 32,768 codes whose codebook utilization reaches nearly 100%. UniTTS uses DistilCodec for audio discretization, while its backbone network adopts Qwen2.5-7B to model the relationships between audio tokens.
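Because DistilCodec is single-codebook, each audio frame maps to exactly one discrete token, so audio can share the LLM's vocabulary. Below is a minimal sketch of that token-space layout, assuming audio codes are shifted into the LLM vocabulary by a fixed offset; `AUDIO_TOKEN_OFFSET` is a hypothetical placeholder, and in the shipped code this shift is handled by the `plus_llm_offset`/`minus_token_offset` options shown in the inference examples.

```python
# A minimal sketch, assuming audio codes occupy a contiguous id range
# appended to the text vocabulary. AUDIO_TOKEN_OFFSET is hypothetical;
# the real offset handling is done via plus_llm_offset/minus_token_offset.
AUDIO_TOKEN_OFFSET = 152_000  # hypothetical placeholder, not the real value

def codec_to_llm_ids(codes: list[int]) -> list[int]:
    """Shift DistilCodec codes (0..32767) into the LLM token-id range."""
    return [c + AUDIO_TOKEN_OFFSET for c in codes]

def llm_to_codec_ids(ids: list[int]) -> list[int]:
    """Inverse shift applied before decoding tokens back to audio."""
    return [i - AUDIO_TOKEN_OFFSET for i in ids]
```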
Our main contributions are summarized as follows:
- DistilCodec: We propose a training methodology that distills multi-codebook Neural Audio Codecs (NACs) into a single-codebook NAC. With this approach we developed DistilCodec, a single-codebook NAC containing 32,768 codes that achieves nearly 100% utilization with a balanced code distribution. Notably, DistilCodec is trained on universal audio data rather than being restricted to speech-specific datasets.
- UniTTS: We present UniTTS, a novel TTS system built on Qwen2.5-7B and DistilCodec. Leveraging DistilCodec's comprehensive audio modeling capability, UniTTS achieves end-to-end speech synthesis with full-spectrum audio input/output. The system demonstrates more natural emotional expressiveness than conventional TTS systems, particularly in capturing subtle prosodic variations and affective nuances during audio generation.
- Novel Audio Language Model Paradigm: We establish a dual-phase Audio Language Model (ALM) training framework comprising (i) audio perceptual modeling (DistilCodec), which focuses purely on acoustic discretization, and (ii) audio cognitive modeling (UniTTS), implemented via pretraining (incorporating universal audio autoregressive tasks), supervised fine-tuning (evaluating the impact of text-audio interleaved prompts; see the interleaving sketch below), and alignment (employing direct preference optimization for speech refinement). This paradigm is enabled by UniTTS's complete end-to-end integration within the LLM.
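To make the text-audio interleaving concrete: during SFT, audio token sequences are embedded directly in the prompt between special markers, alongside ordinary text. The real template is `tts_prompt_ref_text` in `cli/tts_tool.py` (its `.format` fields are `content`, `example_voice`, and `example_text`); the layout sketched below is illustrative, not the verbatim format.

```python
# Illustrative sketch only, not the verbatim tts_prompt_ref_text template.
# <audio_123>... stands in for real DistilCodec token strings.
ref_audio_tokens = "<audio_123><audio_456>"  # hypothetical placeholder tokens
interleaved_prompt = (
    f"example_text: a transcript of the reference audio\n"
    f"example_voice: <|inter_audio_begin|>{ref_audio_tokens}<|inter_audio_end|>\n"
    f"content: the text to synthesize"
)
```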
## Training data distribution and application scope
During pretraining, the model was trained on cross-lingual text-speech paired datasets (English and Chinese) alongside text instruction corpora. The subsequent SFT and alignment phases incorporated three datasets: a text instruction dataset, a long-CoT dataset, and a Chinese TTS dataset. Consequently, the model demonstrates robust capabilities in text conversation, long-CoT conversation, and Chinese TTS.
The distribution of the pretraining data is as follows:
| Data Type | Data Size (B tokens) |
|---|---|
| Text Data | 140 |
| Text-Audio Alignment Data | 82 |
| Audio Data | 100 |
| Total | 322 |
The distribution of the SFT training data is as follows:
| Data Type | Number of Samples |
|---|---|
| Text Data | 181K |
| Long-CoT Dataset | 55K |
| Chinese Text-Audio Alignment Data | 401K |
| Total | 637K |
The distribution of the alignment (LPO) training data is as follows:
| Data Type | Number of Samples |
|---|---|
| General SFT Data | 100K |
| Long-CoT Dataset | 45K |
| Chinese Text-Audio Alignment Data | 300K |
| Total | 445K |
The proposed model supports the following capabilities:
| Application Type | Support Status |
|---|---|
| Text conversation | Supported |
| Long-CoT conversation | Supported |
| Chinese TTS | Supported |
## Install

### Clone and Install
- Clone the repositories:

```shell
git clone [email protected]:IDEA-Emdoor-Lab/UniTTS.git
git clone [email protected]:IDEA-Emdoor-Lab/DistilCodec.git
cd UniTTS
```

- Set up the environment:

```shell
conda create -n unitts -y python=3.10
conda activate unitts
pip install -r requirements.txt
```
### Model Download
Download via git clone:
```shell
mkdir -p pretrained_models
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
# Clone the UniTTS model
git clone [email protected]:IDEA-Emdoor/UniTTS-mixed-v0.1
```
## Inference Usage

### TTS Inference Usage
#### Step 1: Initialize the model

```python
import sys

import soundfile as sf
import librosa
from vllm import LLM, SamplingParams

from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, tts_prompt_ref_text

sys.path.append('../DistilCodec/')  # add the DistilCodec repo to the import path
from distil_codec import DistilCodec  # type: ignore

# Initialize the models
model_name = "IDEA-Emdoor/UniTTS-mixed-v0.1"
model_config = "IDEA-Emdoor/UniTTS-mixed-v0.1/codec_config.json"
ckpt_config = "IDEA-Emdoor/UniTTS-mixed-v0.1"
ref_audio_path = 'cli/ref.mp3'
ref_text = '求求你,再给我一次机会,我保证不会让你失望……'  # "Please, give me one more chance. I promise I won't let you down..."
infer_text = '天啊!这竟然是真的?我简直不敢相信!'  # "Oh my god! It's actually true? I can hardly believe it!"

llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)
codec = DistilCodec.from_pretrained(
    config_path=model_config,
    model_path=ckpt_config,
    use_generator=True,
    is_debug=False,
    local_rank=0).eval()

tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)
```
#### Step 2: Format the prompt

```python
# Encode the reference audio into DistilCodec tokens and wrap it in the
# interleaved-audio markers expected by the prompt template.
ref_audio_text = enocde_audio(codec, tokenizer, ref_audio_path)
ref_audio_text = f'<|inter_audio_begin|>{ref_audio_text}<|inter_audio_end|>'
prompt = tts_prompt_ref_text.format(content=infer_text, example_voice=ref_audio_text, example_text=ref_text)
```
#### Step 3: Generate speech tokens

```python
sampling_params = SamplingParams(temperature=0.9, top_p=0.9, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)
```
#### Step 4: Decode speech tokens

```python
output_dir = './'  # save path
# Re-encode the generated text to token ids and strip the wrapping special tokens.
tokens = tokenizer.tokenizer.encode(output[0].outputs[0].text)[1:-2]
utt = 'infer'
y_gen = codec.decode_from_codes(
    tokens,
    minus_token_offset=True  # must be True if 'plus_llm_offset' was set to True in demo_for_generate_audio_codes
)
codec.save_wav(
    audio_gen_batch=y_gen,
    nhop_lengths=[y_gen.shape[-1]],
    save_path=output_dir,
    name_tag=utt
)
```
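After Step 4, `save_wav` writes the synthesized waveform under `output_dir` using the `infer` name tag. A quick sanity check, assuming the output file is named after that tag (the exact filename pattern produced by `save_wav` is an assumption here):

```python
# Sanity-check the generated audio; './infer.wav' assumes save_wav names
# the file after the name_tag, which may differ in practice.
import soundfile as sf

audio, sr = sf.read('./infer.wav')
print(f'Generated {len(audio) / sr:.2f}s of audio at {sr} Hz')
```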
### Long-CoT Inference Usage
#### Step 1: Initialize the model

```python
from vllm import LLM, SamplingParams

from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, long_cot_prompt_template

# Initialize the model
model_name = "IDEA-Emdoor/UniTTS-mixed-v0.1"
infer_text = "给我写一首春天的作文"  # "Write me an essay about spring."
llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)
tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)
```
#### Step 2: Format the prompt

```python
prompt = long_cot_prompt_template.format(question=infer_text)
```

#### Step 3: Generate the response

```python
sampling_params = SamplingParams(temperature=0.8, top_p=0.8, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)
print(output[0].outputs[0].text)
```
### Text Conversation Inference Usage
#### Step 1: Initialize the model

```python
from vllm import LLM, SamplingParams

from cli.tokenizer import QWenTokenizer
from cli.tts_tool import enocde_audio, text_conversation_prompt_template

# Initialize the model
model_name = "IDEA-Emdoor/UniTTS-mixed-v0.1"
infer_text = "天空为什么是蓝色的?"  # "Why is the sky blue?"
llm = LLM(model=model_name, dtype='auto', gpu_memory_utilization=0.8, seed=0)
tokenizer: QWenTokenizer = QWenTokenizer(model_name)
stop_tokens = ["<|endoftext|>", "<|endofaudio|>", "<|im_end|>"]
stop_ids = tokenizer.tokenizer.convert_tokens_to_ids(stop_tokens)
```
#### Step 2: Format the prompt

```python
prompt = text_conversation_prompt_template.format(question=infer_text)
```

#### Step 3: Generate the response

```python
sampling_params = SamplingParams(temperature=0.75, top_p=0.75, stop_token_ids=stop_ids, max_tokens=6000)
output = llm.generate([prompt], sampling_params)
print(output[0].outputs[0].text)
```
## Citation

```bibtex
@misc{wang2025unittsendtoendttsdecoupling,
    title={UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information},
    author={Rui Wang and Qianguo Sun and Tianrong Chen and Zhiyun Zeng and Junlong Wu and Jiaxing Zhang},
    year={2025},
    eprint={2505.17426},
    archivePrefix={arXiv},
    primaryClass={cs.SD},
    url={https://arxiv.org/abs/2505.17426},
}
```
## Disclaimer

Our model provides zero-shot voice cloning for academic research purposes only. We encourage the community to uphold safety and ethical principles in AI research and applications.

Important notes:

- Compliance with the model's open-source license is mandatory.
- Unauthorized voice replication applications are strictly prohibited.
- The developers bear no responsibility for any misuse of this model.
## License

UniTTS: An end-to-end TTS system without decoupling of acoustic and semantic information © 2025 by Rui Wang, Qianguo Sun, Tianrong Chen, Zhiyun Zeng, Junlong Wu, and Jiaxing Zhang is licensed under CC BY-NC-ND 4.0.