# NonverbalTTS Dataset 🎵🗣️

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.15274617.svg)](https://doi.org/10.5281/zenodo.15274617) [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-blue)](https://huggingface.co/datasets/deepvk/NonverbalTTS)

**NonverbalTTS** is a 17-hour open-access English speech corpus with aligned text annotations for **nonverbal vocalizations (NVs)** and **emotional categories**, designed to advance expressive text-to-speech (TTS) research.

## Key Features ✨

- **17 hours** of high-quality speech data
- **10 NV types**: Breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing
- **8 emotion categories**: Angry, disgusted, fearful, happy, neutral, sad, surprised, other
- **Diverse speakers**: 2,296 speakers (60% male, 40% female)
- **Multi-source**: Derived from the [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/) and [Expresso](https://speechbot.github.io/expresso/) corpora
- **Rich metadata**: Emotion labels, NV annotations, speaker IDs, audio quality metrics

## Metadata Schema (`metadata.csv`) 📋

| Column | Description | Example |
|--------|-------------|---------|
| `index` | Unique sample ID | `ex01_sad_00265` |
| `file_name` | Audio file path | `wavs/ex01_sad_00265.wav` |
| `Emotion` | Emotion label | `sad` |
| `Initial text` | Raw transcription | `"So, Mom, 🌬️ how've you been?"` |
| `Annotator response {1,2,3}` | Refined transcription from each annotator | `"So, Mom, how've you been?"` |
| `Result` | Final fused transcription | `"So, Mom, 🌬️ how've you been?"` |
| `dnsmos` | Audio quality score (1-5) | `3.936982` |
| `duration` | Audio length (seconds) | `3.6338125` |
| `speaker_id` | Speaker identifier | `ex01` |
| `data_name` | Source corpus | `Expresso` |
| `gender` | Speaker gender | `m` |

**NV symbols**: 🌬️ = breath, 😂 = laughter, etc. (see the [Annotation Guidelines](https://zenodo.org/records/15274617)).

## Loading the Dataset 💻

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")
```

A filtering sketch that uses the metadata columns above appears at the end of this README.

## Annotation Pipeline 🔧

1. **Automatic detection**
   - NV detection with [BEATs](https://arxiv.org/abs/2409.09546)
   - Emotion classification with [emotion2vec+](https://arxiv.org/abs/2402.XXX)
   - ASR transcription with the Canary model
2. **Human validation**
   - 3 annotators per sample
   - Non-English and multi-speaker clips filtered out
   - NV and emotion labels validated and refined
3. **Fusion algorithm**
   - Majority voting for final transcriptions
   - Pyalign-based sequence alignment
   - Multi-annotator hypothesis merging (a simplified sketch appears at the end of this README)

## Benchmark Results 📊

Fine-tuning CosyVoice-300M on NonverbalTTS (the **NVTTS** column below) achieves parity with state-of-the-art proprietary systems:

| Metric | NVTTS | CosyVoice2 |
|--------|-------|------------|
| Speaker Similarity | 0.89 | 0.85 |
| NV Jaccard (Laugh) | 0.92 | 0.74 |
| Human Preference | 33.4% | 35.4% |

## Use Cases 💡

- Training expressive TTS models
- Zero-shot NV synthesis
- Emotion-aware speech generation
- Prosody modeling research

## License 📜

- Annotations: CC BY-NC-SA 4.0
- Audio: Adheres to the original source licenses (VoxCeleb, Expresso)

## Citation 📝

```bibtex
@dataset{nonverbaltts2024,
  author    = {Anonymous},
  title     = {NonverbalTTS Dataset},
  month     = dec,
  year      = 2024,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.15274617},
  url       = {https://zenodo.org/records/15274617}
}
```
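
## Example: Filtering by Emotion and NV Type 🔍

A minimal sketch of how the metadata columns above can be combined once the dataset is loaded. It assumes the Hub release exposes the `Emotion` and `Result` columns under exactly the names shown in `metadata.csv`, and it does not assume a particular split name; swap in any other emotion label or NV symbol as needed.

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")

# Pick whichever split the Hub exposes first rather than assuming "train".
split_name = next(iter(dataset))
rows = dataset[split_name]

# Keep happy samples whose fused transcription ("Result") contains the
# laughter symbol from the annotation guidelines.
laughing = rows.filter(
    lambda r: r["Emotion"] == "happy" and "😂" in r["Result"]
)
print(f"{len(laughing)} happy samples with laughter out of {len(rows)}")
```

The same pattern works for thresholding on `dnsmos` or `duration` when selecting training subsets.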
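
## Example: Majority-Vote Fusion of Annotator Hypotheses 🧩

The fusion step in the annotation pipeline merges three annotator transcriptions by aligning them and voting on each token. The released pipeline uses pyalign for the alignment; the sketch below substitutes Python's standard-library `difflib` purely to illustrate the voting idea, so it is a simplification rather than the actual fusion code.

```python
import difflib
from collections import Counter

def fuse(hypotheses):
    """Keep every token of the longest hypothesis that at least half of the
    annotators also produced (illustrative stand-in for the pyalign-based
    alignment and majority voting described above)."""
    ref = max(hypotheses, key=len).split()
    votes = Counter()
    for hyp in hypotheses:
        matcher = difflib.SequenceMatcher(a=ref, b=hyp.split(), autojunk=False)
        for block in matcher.get_matching_blocks():
            for i in range(block.a, block.a + block.size):
                votes[i] += 1
    keep = len(hypotheses) / 2
    return " ".join(tok for i, tok in enumerate(ref) if votes[i] >= keep)

# Two of three annotators kept the breath symbol, so it survives the vote.
print(fuse([
    "So, Mom, 🌬️ how've you been?",
    "So, Mom, how've you been?",
    "So, Mom, 🌬️ how've you been?",
]))  # -> "So, Mom, 🌬️ how've you been?"
```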