---
license: cc-by-nc-sa-4.0
task_categories:
- text-to-speech
- automatic-speech-recognition
language:
- zh
---

# NVSpeech Dataset

## Overview

The NVSpeech dataset provides extensive annotations of paralinguistic vocalizations for Mandarin Chinese speech, aimed at enhancing the capabilities of automatic speech recognition (ASR) and text-to-speech (TTS) systems. The dataset features explicit word-level annotations for 18 categories of paralinguistic vocalizations, covering non-verbal sounds such as laughter and breathing as well as lexicalized interjections such as "uhm" and "oh."

## Dataset Description

* **NVSpeech**: A large, automatically annotated set of 174,179 utterances (573.4 hours of speech). Annotations are generated by a state-of-the-art paralinguistic-aware ASR model, providing the scale and diversity needed for robust model training.

## Annotation Categories

The NVSpeech dataset includes annotations for the following paralinguistic vocalization categories:

* [Breathing]
* [Laughter]
* [Cough]
* [Sigh]
* [Confirmation-en]
* [Question-en]
* [Question-ah]
* [Question-oh]
* [Surprise-ah]
* [Surprise-oh]
* [Dissatisfaction-hnn]
* [Uhm]
* [Shh]
* [Crying]
* [Surprise-wa]
* [Surprise-yo]
* [Question-ei]
* [Question-yi]

## Usage

```py
from datasets import load_dataset

dataset = load_dataset("Hannie0813/NVSpeech170k")
```

A short, illustrative snippet for inspecting individual samples is given at the end of this card.

### Intended Use

NVSpeech is designed to facilitate:

* Training and evaluation of paralinguistic-aware speech recognition models.
* Development of expressive and controllable TTS systems that can accurately synthesize human-like speech with inline paralinguistic cues.

### Tasks

* Automatic Speech Recognition (ASR)
* Text-to-Speech (TTS) Synthesis
* Paralinguistic Tagging

## Languages

* Mandarin Chinese

## Dataset Structure

* **Format**: WAV audio paired with text transcriptions that include inline paralinguistic tokens.
* **Size**: 174,179 automatically annotated utterances, totaling over 573 hours.

## License

The NVSpeech dataset is available for research use under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license.

## Citation

If you use NVSpeech in your research, please cite:

```bibtex
```

## Contact

For further questions, please visit the [project webpage](https://nvspeech.github.io/) or contact the authors through the provided channels.
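
## Example: Inspecting Samples

The sketch below shows one way to look at a few transcripts and their audio after loading the dataset. The column names `text` and `audio`, and the `train` split name, are assumptions based on common Hugging Face `datasets` conventions and are not confirmed by this card; print `dataset.features` first to see the actual schema.

```py
from datasets import load_dataset

# Assumption: the data is exposed as a single "train" split.
dataset = load_dataset("Hannie0813/NVSpeech170k", split="train")

# Check the real column names before relying on them.
print(dataset.features)

# Print a handful of transcripts; inline paralinguistic tokens such as
# [Laughter] or [Breathing] appear directly in the text. The "text" and
# "audio" keys below are assumed, not documented on this card.
for sample in dataset.select(range(3)):
    print(sample["text"])
    audio = sample["audio"]
    print(audio["sampling_rate"], len(audio["array"]))
```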