---
license: mit
dataset_info:
  features:
  - name: id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 48000
  - name: utterance
    dtype: string
  - name: landmarks
    dtype: string
  splits:
  - name: train
    num_bytes: 3491366369.548
    num_examples: 115487
  download_size: 2130707185
  dataset_size: 3491366369.548
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- en
tags:
- audio
- speech-synthesis
- acoustic-landmarks
- phonetics
- text-to-speech
pretty_name: Pink Trombone English Phonetic & Landmark Dataset
---
# Dataset Card for Pink Trombone English Phonetic & Landmark Dataset

## Dataset Summary
This dataset contains audio samples of English words generated by the Pink Trombone, a popular open-source vocal tract synthesizer. The primary goal of this dataset is to provide a clean, large-scale resource linking phonetic sequences to both their acoustic realization and the underlying articulatory landmarks.
Each sample in the dataset corresponds to a word from the Oxford English Dictionary. For each word, the dataset provides:
- A synthesized audio file (`.wav`) at a 48 kHz sampling rate.
- The phonetic keyframes used by the synthesizer to generate the audio.
- A time-aligned sequence of acoustic landmarks extracted directly from the vocal tract model's state during synthesis. These landmarks are based on the theoretical framework developed by Professor Kenneth Stevens.
This resource is designed for research in speech synthesis, phonetics, automatic speech recognition (ASR), and especially for training and evaluating models for acoustic landmark detection.
## Supported Tasks and Leaderboards
This dataset can be used for a variety of tasks:
- `text-to-speech`: The dataset provides a direct mapping from phonetic sequences to audio, which can be used to train TTS models.
- `automatic-speech-recognition`: The synthesized audio can be used to train or augment ASR models, particularly for phoneme recognition.
- Acoustic Landmark Detection (primary task): The core value of this dataset is the parallel landmark data. It can be used to train models that identify the locations of crucial acoustic-phonetic events in a speech signal; see the sketch after this list.
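As a rough illustration of the landmark-detection use case, the sketch below turns one dataset row into a per-frame target matrix suitable for training a detector. The 100 Hz frame rate and the restriction to the four landmark types shown in the example instance are assumptions for illustration, not properties of the dataset.

```python
import json
import numpy as np

# Landmark inventory assumed from the example instance in this card;
# the full dataset may contain additional types.
LANDMARK_TYPES = ["Sc", "Sr", "Gc", "Gr"]
TYPE_TO_ID = {t: i for i, t in enumerate(LANDMARK_TYPES)}

def landmark_frame_targets(example, frame_rate=100):
    """Build a (num_frames, num_types) matrix with 1s at landmark frames."""
    landmarks = json.loads(example["landmarks"])  # stored as a JSON string
    duration = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
    num_frames = int(np.ceil(duration * frame_rate))
    targets = np.zeros((num_frames, len(LANDMARK_TYPES)), dtype=np.float32)
    for lm in landmarks:
        if lm["type"] not in TYPE_TO_ID:
            continue  # skip types outside the assumed inventory
        frame = min(int(lm["time"] * frame_rate), num_frames - 1)
        targets[frame, TYPE_TO_ID[lm["type"]]] = 1.0
    return targets
```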
## Languages

The audio data consists of English words and their phonemic transcriptions in the International Phonetic Alphabet (IPA). The language code is `en`.
## Dataset Structure

The dataset consists of a single split, `train`, containing 115,487 samples.
### Data Instances
A typical data instance looks like this:
```json
{
  "id": "basic",
  "audio": {
    "path": "basic.wav",
    "array": [-0.00021362, -0.00045776, ..., 0.0001831],
    "sampling_rate": 48000
  },
  "utterance": {
    "name": "basic",
    "keyframes": [
      { "isSubPhoneme": false, "intensity": 1, "frequency": 1, "phoneme": "b" },
      { "isSubPhoneme": false, "intensity": 1, "frequency": 1, "phoneme": "eɪ" },
      { "isSubPhoneme": false, "intensity": 1, "frequency": 1, "phoneme": "s" },
      { "isSubPhoneme": false, "intensity": 1, "frequency": 1, "phoneme": "ɪ" },
      { "isSubPhoneme": false, "intensity": 1, "frequency": 1, "phoneme": "k" }
    ]
  },
  "landmarks": [
    { "type": "Sc", "time": 0.1, "name": "b(0)" },
    { "type": "Sr", "time": 0.2, "name": "b(0)" },
    { "type": "Gc", "time": 0.3, "name": "eɪ(0)" },
    { "type": "Gr", "time": 0.4, "name": "eɪ(0)" }
  ]
}
```
(Note: the `utterance` and `landmarks` fields are stored as JSON-formatted strings in the dataset. The example above shows their parsed structure for clarity.)
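A minimal loading sketch, assuming the standard `datasets` API (the repository ID below is a placeholder; substitute this dataset's actual Hub path):

```python
import json
from datasets import load_dataset

# Placeholder repository ID, not the real one.
ds = load_dataset("<user>/pink-trombone-landmarks", split="train")

example = ds[0]
utterance = json.loads(example["utterance"])  # phonetic keyframes
landmarks = json.loads(example["landmarks"])  # time-aligned landmark events
audio = example["audio"]                      # dict with "array" and "sampling_rate"

print(utterance["name"], len(landmarks), audio["sampling_rate"])
```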
### Data Fields
- `id` (string): The orthographic representation of the generated word (e.g., "basic"). It corresponds to the base name of the source files.
- `audio` (`datasets.Audio`): A `datasets.Audio` object containing the audio data. The audio is mono, sampled at 48 kHz.
- `utterance` (string): A JSON-formatted string describing the phonetic sequence used for synthesis. Once parsed, it contains:
  - `name` (string): The target word.
  - `keyframes` (list of objects): A sequence of phonemes and their associated synthesis parameters.
- `landmarks` (string): A JSON-formatted string containing a time-ordered list of acoustic landmarks detected during synthesis. Each landmark is an object with:
  - `type` (string): The landmark type (e.g., `Sc` for consonantal closure, `Sr` for release, `Gc` for glottal closure).
  - `time` (float): The timestamp of the landmark in seconds from the start of the audio.
  - `name` (string): The associated phoneme.
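Because landmark times are in seconds and the audio is decoded as a raw sample array, aligning the two is a single multiplication. A small sketch:

```python
import json

def landmark_sample_indices(example):
    """Map each landmark's time (seconds) to a waveform sample index."""
    sr = example["audio"]["sampling_rate"]  # 48000 for this dataset
    n = len(example["audio"]["array"])
    return [
        (lm["type"], lm["name"], min(int(lm["time"] * sr), n - 1))
        for lm in json.loads(example["landmarks"])
    ]
```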
## Dataset Creation

### Source Data
The word list and the corresponding phonetic transcriptions were derived from the Oxford English Dictionary (OED). This list was processed to create a one-to-one mapping between English words and their canonical phonetic representations.
### Generation Process
The generation process was fully automated:
- A phonetic sequence was generated for each word.
- This sequence was fed into the Pink Trombone web-based vocal tract synthesizer.
- The synthesizer generated the corresponding `.wav` audio file at a 48 kHz sampling rate.
- Simultaneously, the internal state of the vocal tract model was monitored to extract articulatory events (illustrated in the sketch after this list).
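The extraction code itself is not included in this card, but the idea can be conveyed with a hypothetical sketch: watch a constriction trace from the vocal tract model and emit closure/release events when it crosses a threshold. The `diameters` trace, frame period, and 0.1 threshold are illustrative assumptions, not Pink Trombone's actual internals.

```python
# Hypothetical state-based landmark extraction; parameter names and the
# closure threshold are assumptions, not the synthesizer's real API.
def extract_oral_landmarks(diameters, frame_period, closure_threshold=0.1):
    """Emit Sc (closure) / Sr (release) events from a constriction trace."""
    events, closed = [], False
    for i, d in enumerate(diameters):
        t = i * frame_period
        if not closed and d < closure_threshold:
            events.append({"type": "Sc", "time": t})
            closed = True
        elif closed and d >= closure_threshold:
            events.append({"type": "Sr", "time": t})
            closed = False
    return events
```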
### Annotations

The landmark annotations are not human-labeled but are extracted directly from the synthesizer's state. This provides a perfectly aligned, noise-free ground truth based on the articulatory events defined by the synthesizer. The landmark definitions (`Sc`, `Sr`, `Gc`, etc.) are based on the acoustic-phonetic theory of speech events proposed by Professor Kenneth N. Stevens. This theory posits that the speech signal is best described as a sequence of discrete acoustic landmarks corresponding to consonantal and vocalic gestures.
## Considerations for Using the Data
- Synthetic Data: This is not human speech. While it is phonetically grounded, it lacks the natural prosody, variability, and coarticulation effects found in human recordings. It is ideal for studying the link between articulation and acoustics in a controlled environment.
- Pink Trombone's Voice: All samples are generated with the same characteristic voice of the Pink Trombone synthesizer, so the dataset contains no speaker variability.
## Other Information

### Citation
If you use this dataset in your research, please consider citing the foundational work on which the landmarks are based:
```bibtex
@book{stevens1998acoustic,
  title={Acoustic Phonetics},
  author={Stevens, Kenneth N.},
  year={1998},
  publisher={MIT Press}
}
```
And please also cite this dataset repository.
### Dataset Curators

Mateo Cámara (UPM & MIT). For questions or feedback, please open an issue in the dataset repository.