---
tags:
  - audio
license: apache-2.0
language:
  - en
pretty_name: NonverbalTTS
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train/**
      - split: dev
        path: data/dev/**
      - split: test
        path: data/test/**
      - split: other
        path: data/other/**
---

# NonverbalTTS Dataset 🎵🗣️


NonverbalTTS is a 17-hour open-access English speech corpus with aligned text annotations for nonverbal vocalizations (NVs) and emotional categories, designed to advance expressive text-to-speech (TTS) research.

## Key Features ✨

- **17 hours** of high-quality speech data
- **10 NV types**: Breathing, laughter, sighing, sneezing, coughing, throat clearing, groaning, grunting, snoring, sniffing
- **8 emotion categories**: Angry, disgusted, fearful, happy, neutral, sad, surprised, other
- **Diverse speakers**: 2296 speakers (60% male, 40% female)
- **Multi-source**: Derived from the VoxCeleb and Expresso corpora
- **Rich metadata**: Emotion labels, NV annotations, speaker IDs, audio quality metrics
- **Sampling rates**: 16 kHz for audio from VoxCeleb, 48 kHz for audio from Expresso

## Loading the Dataset 💻

```python
from datasets import load_dataset

dataset = load_dataset("deepvk/NonverbalTTS", revision="refs/convert/parquet")
```
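
Once loaded, a quick inspection helps confirm the schema before training. The column names `audio` and `text` below are assumptions (this card does not list the schema), so check `column_names` first; the `cast_column` call resamples the mixed 16 kHz/48 kHz audio to a single rate on the fly.

```python
from datasets import Audio

# Verify the actual schema before relying on any column name.
print(dataset["train"].column_names)

# Assumed columns: "audio" (waveform) and "text" (transcript with NV tags).
sample = dataset["train"][0]
print(sample["audio"]["sampling_rate"])  # 16000 (VoxCeleb) or 48000 (Expresso)

# Optional: resample everything to 16 kHz, since the two sources mix rates.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
```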
    

## Annotation Pipeline 🔧

1. **Automatic Detection**
   - NV detection using BEATs
   - Emotion classification with emotion2vec+
   - ASR transcription via the Canary model
2. **Human Validation**
   - 3 annotators per sample
   - Filtering of non-English and multi-speaker clips
   - NV/emotion validation and refinement
3. **Fusion Algorithm** (sketched below)
   - Majority voting for final transcriptions
   - Pyalign-based sequence alignment
   - Multi-annotator hypothesis merging
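
The fusion code itself is not included in this card; the following is a minimal sketch of the majority-voting idea, assuming the annotator token sequences have already been aligned to a common length (the authors use pyalign-based alignment for that step; here gaps are simply padded with empty strings, and tie-breaking is arbitrary).

```python
from collections import Counter

def fuse_transcripts(hypotheses: list[list[str]]) -> list[str]:
    """Merge token sequences from several annotators by majority vote.

    Sketch only: assumes hypotheses are already aligned; padding with ""
    stands in for alignment gaps.
    """
    width = max(len(h) for h in hypotheses)
    padded = [h + [""] * (width - len(h)) for h in hypotheses]
    fused = []
    for position in zip(*padded):
        token, _ = Counter(position).most_common(1)[0]  # ties break arbitrarily
        if token:  # drop positions where the majority is a gap
            fused.append(token)
    return fused

# Three hypothetical annotations of the same clip, with an NV tag inline.
annotations = [
    "i was just [laughter] surprised".split(),
    "i was just [laughter] so surprised".split(),
    "i was just surprised".split(),
]
print(fuse_transcripts(annotations))  # ['i', 'was', 'just', '[laughter]', 'surprised']
```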

## Benchmark Results 📊

Fine-tuning CosyVoice-300M on NonverbalTTS achieves parity with state-of-the-art proprietary systems:

| Metric             | NVTTS | CosyVoice2 |
|--------------------|-------|------------|
| Speaker Similarity | 0.89  | 0.85       |
| NV Jaccard         | 0.80  | 0.78       |
| Human Preference   | 33.4% | 35.4%      |
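
NV Jaccard measures the overlap between the nonverbal tags in the reference and synthesized transcriptions. The exact definition is not given in this card, but a plain Jaccard index over NV tag sets would look like the sketch below; the inline `[tag]` format is an assumption.

```python
import re

def nv_jaccard(reference: str, hypothesis: str) -> float:
    """Jaccard index over the sets of NV tags found in two transcriptions.

    Sketch only: assumes NV events appear inline as bracketed tags such
    as "[laughter]"; the paper may use multisets or another scheme.
    """
    ref_tags = set(re.findall(r"\[([a-z ]+)\]", reference.lower()))
    hyp_tags = set(re.findall(r"\[([a-z ]+)\]", hypothesis.lower()))
    if not ref_tags and not hyp_tags:
        return 1.0  # both tag-free: treat as perfect agreement
    return len(ref_tags & hyp_tags) / len(ref_tags | hyp_tags)

print(nv_jaccard("well [laughter] okay [sigh]", "well [laughter] okay"))  # 0.5
```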

    Use Cases πŸ’‘

    • Training expressive TTS models
    • Zero-shot NV synthesis
    • Emotion-aware speech generation
    • Prosody modeling research

## License 📜

- **Annotations**: CC BY-NC-SA 4.0
- **Audio**: Adheres to the original source licenses (VoxCeleb, Expresso)

## Citation 📝

TODO