---
tags:
- audio
license: apache-2.0
language:
- en
pretty_name: NonverbalTTS
size_categories:
- 1K<n<10K
---

## Metadata Schema (`metadata.csv`) 📋

| Column | Description | Example |
|--------|-------------|---------|
| `index` | Unique sample ID | `ex01_sad_00265` |
| `file_name` | Audio file path | `wavs/ex01_sad_00265.wav` |
| `Emotion` | Emotion label | `sad` |
| `Initial text` | Raw transcription | `"So, Mom, 🌬️ how've you been?"` |
| `Annotator response {1,2,3}` | Refined transcriptions | `"So, Mom, how've you been?"` |
| `Result` | Final fused transcription | `"So, Mom, 🌬️ how've you been?"` |
| `dnsmos` | Audio quality score (1-5) | `3.936982` |
| `duration` | Audio length (seconds) | `3.6338125` |
| `speaker_id` | Speaker identifier | `ex01` |
| `data_name` | Source corpus | `Expresso` |
| `gender` | Speaker gender | `m` |

**NV Symbols**: 🌬️ = Breath, 😂 = Laughter, etc. (See [Annotation Guidelines](https://zenodo.org/records/15274617))

## Loading the Dataset 💻

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS")
```

## Annotation Pipeline 🔧

1. **Automatic Detection**
   - NV detection using [BEATs](https://arxiv.org/abs/2409.09546)
   - Emotion classification with [emotion2vec+](https://huggingface.co/emotion2vec/emotion2vec_plus_large)
   - ASR transcription via the Canary model
2. **Human Validation**
   - 3 annotators per sample
   - Filtering of non-English and multi-speaker clips
   - NV/emotion validation and refinement
3. **Fusion Algorithm**
   - Majority voting for final transcriptions
   - Pyalign-based sequence alignment
   - Multi-annotator hypothesis merging

## Benchmark Results 📊

Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:

| Metric | NVTTS | CosyVoice2 |
|--------|-------|------------|
| Speaker Similarity | 0.89 | 0.85 |
| NV Jaccard | 0.80 | 0.78 |
| Human Preference | 33.4% | 35.4% |

## Use Cases 💡

- Training expressive TTS models
- Zero-shot NV synthesis
- Emotion-aware speech generation
- Prosody modeling research

## License 📜

- Annotations: CC BY-NC-SA 4.0
- Audio: adheres to the original source licenses (VoxCeleb, Expresso)

## Citation 📝

```
@dataset{nonverbaltts2024,
  author    = {Borisov, Maksim and Spirin, Egor and Dyatlova, Darya},
  title     = {NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech},
  month     = apr,
  year      = 2025,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.15274617},
  url       = {https://zenodo.org/records/15274617}
}
```
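## Example: Filtering by Quality 🧹

As a usage sketch, the `dnsmos` and other columns from the metadata schema can be used to filter samples before training. The snippet below runs on an inline CSV fragment mimicking `metadata.csv` (the rows and the `3.0` threshold are illustrative, not part of the dataset's tooling):

```python
import csv
import io

# Illustrative rows following the metadata.csv schema described above.
SAMPLE_CSV = """index,file_name,Emotion,Result,dnsmos,duration,speaker_id,data_name,gender
ex01_sad_00265,wavs/ex01_sad_00265.wav,sad,"So, Mom, 🌬️ how've you been?",3.936982,3.6338125,ex01,Expresso,m
ex02_xx_00001,wavs/ex02_xx_00001.wav,happy,Great to see you!,2.1,1.5,ex02,VoxCeleb,f
"""

def filter_rows(csv_text, min_dnsmos=3.0):
    """Keep only rows whose DNSMOS quality score meets the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if float(row["dnsmos"]) >= min_dnsmos]

kept = filter_rows(SAMPLE_CSV)
print([row["index"] for row in kept])  # → ['ex01_sad_00265']
```

The same predicate can be applied to the Hugging Face `dataset` object via its `filter` method.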
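## Example: Majority-Vote Fusion 🗳️

The fusion step of the annotation pipeline can be illustrated with a minimal sketch. The dataset's actual pipeline aligns annotator hypotheses with pyalign before merging; this simplified version assumes the three transcriptions are already token-aligned and takes a position-wise majority vote (the function name and example strings are illustrative, not from the dataset's code):

```python
from collections import Counter

def majority_vote(hypotheses):
    """Position-wise majority vote over already-aligned token sequences.

    The real NonverbalTTS pipeline performs pyalign-based sequence
    alignment first; here all hypotheses must have equal token counts.
    """
    token_lists = [h.split() for h in hypotheses]
    assert len({len(t) for t in token_lists}) == 1, "hypotheses must be aligned"
    fused = []
    for position in zip(*token_lists):
        token, _count = Counter(position).most_common(1)[0]
        fused.append(token)
    return " ".join(fused)

annotations = [
    "So, Mom, 🌬️ how've you been?",
    "So, Mom, 🌬️ how've you been?",
    "So, Mom, umm how've you been?",
]
print(majority_vote(annotations))  # → So, Mom, 🌬️ how've you been?
```

With three annotators per sample, every position has a strict majority unless all three disagree; the real pipeline's hypothesis merging handles those残 cases during alignment.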