---
|
tags: |
|
- audio |
|
license: apache-2.0 |
|
language: |
|
- en |
|
pretty_name: NonverbalTTS |
|
size_categories: |
|
- 1K<n<10K |
|
|
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: default/train/** |
|
- split: dev |
|
path: default/dev/** |
|
- split: test |
|
path: default/test/** |
|
- split: other |
|
path: default/other/** |
|
|
|
--- |
|
# NonverbalTTS Dataset 🎵🗣️
|
|
|
[arXiv:2507.13155](https://arxiv.org/abs/2507.13155)

[deepvk/NonverbalTTS on Hugging Face](https://huggingface.co/datasets/deepvk/NonverbalTTS)
|
|
|
**NonverbalTTS** is a 17-hour open-access English speech corpus with aligned text annotations for **nonverbal vocalizations (NVs)** and **emotional categories**, designed to advance expressive text-to-speech (TTS) research. |
|
|
|
## Key Features ✨
|
|
|
- **17 hours** of high-quality speech data |
|
- **10 NV types**: Breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing |
|
- **8 emotion categories**: Angry, disgusted, fearful, happy, neutral, sad, surprised, other |
|
- **Diverse speakers**: 2296 speakers (60% male, 40% female) |
|
- **Multi-source**: Derived from [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/) and [Expresso](https://speechbot.github.io/expresso/) corpora |
|
- **Rich metadata**: Emotion labels, NV annotations, speaker IDs, audio quality metrics |
|
- **Sampling rate**: 16kHz for audio from VoxCeleb, 48kHz for audio from Expresso |
|
<!-- ## Dataset Structure 📁

NonverbalTTS/
├── wavs/                 # Audio files (16-48 kHz WAV format)
│   ├── ex01_sad_00265.wav
│   └── ...
├── .gitattributes
├── README.md
└── metadata.csv          # Metadata annotations -->
|
|
|
|
|
<!-- ## Metadata Schema (`metadata.csv`) 📋
|
|
|
| Column | Description | Example | |
|
|--------|-------------|---------| |
|
| `index` | Unique sample ID | `ex01_sad_00265` | |
|
| `file_name` | Audio file path | `wavs/ex01_sad_00265.wav` | |
|
| `Emotion` | Emotion label | `sad` | |
|
| `Initial text` | Raw transcription | `"So, Mom, 🌬️ how've you been?"` |
|
| `Annotator response {1,2,3}` | Refined transcriptions | `"So, Mom, how've you been?"` | |
|
| `Result` | Final fused transcription | `"So, Mom, 🌬️ how've you been?"` |
|
| `dnsmos` | Audio quality score (1-5) | `3.936982` | |
|
| `duration` | Audio length (seconds) | `3.6338125` | |
|
| `speaker_id` | Speaker identifier | `ex01` | |
|
| `data_name` | Source corpus | `Expresso` | |
|
| `gender` | Speaker gender | `m` | --> |
|
|
|
<!-- **NV Symbols**: 🌬️ = Breath, 😂 = Laughter, etc. (See [Annotation Guidelines](https://zenodo.org/records/15274617)) -->
|
|
|
## Loading the Dataset 💻
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
dataset = load_dataset("deepvk/NonverbalTTS") |
|
``` |
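Because the corpus mixes 16 kHz (VoxCeleb) and 48 kHz (Expresso) audio, it is often convenient to decode everything at a single rate. Below is a minimal sketch; the `audio`, `duration`, `dnsmos`, and `Emotion` column names are assumptions based on this card's metadata description, so verify them with `dataset.column_names` first:

```python
from datasets import Audio, load_dataset

dataset = load_dataset("deepvk/NonverbalTTS", split="train")

# Decode every clip at one rate so 16 kHz (VoxCeleb) and 48 kHz
# (Expresso) items can be batched together. The "audio" column name
# is an assumption -- check dataset.column_names on your copy.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))

# Keep short, high-quality clips; thresholds are illustrative only,
# and the "duration"/"dnsmos" fields are assumed from the metadata.
subset = dataset.filter(lambda x: x["duration"] < 10.0 and x["dnsmos"] >= 3.5)

sample = subset[0]
print(sample["Emotion"], sample["audio"]["sampling_rate"])
```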
|
|
|
<!-- # Access train split
```python
print(dataset["train"][0])
```
# Output: {'index': 'ex01_sad_00265', 'file_name': 'wavs/ex01_sad_00265.wav', ...}
-->
|
|
|
## Annotation Pipeline 🔧
|
|
|
1. **Automatic Detection** |
|
- NV detection using [BEATs](https://arxiv.org/abs/2409.09546) |
|
- Emotion classification with [emotion2vec+](https://huggingface.co/emotion2vec/emotion2vec_plus_large) |
|
- ASR transcription via Canary model |
|
|
|
2. **Human Validation** |
|
- 3 annotators per sample |
|
- Filtered non-English/multi-speaker clips |
|
- NV/emotion validation and refinement |
|
|
|
3. **Fusion Algorithm** |
|
- Majority voting for final transcriptions |
|
- Pyalign-based sequence alignment |
|
   - Multi-annotator hypothesis merging (a toy sketch of the voting step follows this list)
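The dataset's actual fusion code is not reproduced here; the following is a toy sketch of position-wise majority voting. It assumes the three annotator hypotheses are already aligned token-by-token, with `""` marking gaps, which the real pipeline obtains via pyalign:

```python
from collections import Counter

def fuse_transcriptions(hypotheses: list[list[str]]) -> list[str]:
    """Position-wise majority vote over aligned token sequences.

    Toy illustration only: assumes hypotheses are already aligned and
    padded to equal length with "" gap tokens (the real pipeline
    derives this alignment with pyalign).
    """
    fused = []
    for tokens in zip(*hypotheses):
        winner, _ = Counter(tokens).most_common(1)[0]
        if winner:  # skip positions where the gap token wins
            fused.append(winner)
    return fused

hypotheses = [
    ["So,", "Mom,", "🌬️", "how've", "you", "been?"],
    ["So,", "Mom,", "",    "how've", "you", "been?"],
    ["So,", "Mom,", "🌬️", "how've", "you", "been?"],
]
print(" ".join(fuse_transcriptions(hypotheses)))
# -> So, Mom, 🌬️ how've you been?
```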
|
|
|
|
|
## Benchmark Results 📊
|
|
|
|
|
Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:

| Metric             | NVTTS | CosyVoice2 |
| ------------------ | ----- | ---------- |
| Speaker Similarity | 0.89  | 0.85       |
| NV Jaccard         | 0.80  | 0.78       |
| Human Preference   | 33.4% | 35.4%      |
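"NV Jaccard" is not defined on this card; a plausible reading (an assumption, not necessarily the paper's exact definition) is the Jaccard similarity between the sets of NV types in the reference and synthesized transcriptions:

```python
def nv_jaccard(reference_nvs: set[str], predicted_nvs: set[str]) -> float:
    """Jaccard similarity between reference and predicted NV tag sets.

    Hypothetical reading of the metric's name; the paper may instead
    score NV events per utterance or over time-aligned spans.
    """
    if not reference_nvs and not predicted_nvs:
        return 1.0  # neither side contains NVs: treat as full agreement
    return len(reference_nvs & predicted_nvs) / len(reference_nvs | predicted_nvs)

print(nv_jaccard({"🌬️", "😂"}, {"🌬️"}))  # 0.5
```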
|
|
|
|
|
## Use Cases 💡
|
- Training expressive TTS models |
|
- Zero-shot NV synthesis |
|
- Emotion-aware speech generation |
|
- Prosody modeling research |
|
|
|
## License 📜
|
- Annotations: CC BY-NC-SA 4.0 |
|
- Audio: Adheres to original source licenses (VoxCeleb, Expresso) |
|
|
|
|
|
<!-- ## Citation 📚
|
|
|
TODO --> |
|
<!--
```
@dataset{nonverbaltts2025,
  author    = {Borisov, Maksim and Spirin, Egor and Dyatlova, Darya},
  title     = {NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech},
  month     = apr,
  year      = 2025,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.15274617},
  url       = {https://zenodo.org/records/15274617}
}
``` -->