---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: ipa
    dtype: string
  - name: text
    dtype: string
  - name: speaker_code
    dtype: string
  - name: speaker_gender
    dtype: string
  - name: speaker_age
    dtype: int64
  - name: pronunciation_accuracy_0_to_10
    dtype: int64
  - name: pronunciation_completeness_fraction
    dtype: float64
  - name: pauseless_flow_0_to_10
    dtype: int64
  - name: cadence_and_intonation_0_to_10
    dtype: int64
  splits:
  - name: train
    num_bytes: 327613766.875
    num_examples: 2481
  - name: test
    num_bytes: 307294798.125
    num_examples: 2479
  download_size: 603135007
  dataset_size: 634908565
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- Speech
- IPA
- Mandarin
pretty_name: speechocean762
size_categories:
- 1K<n<10K
---
# speechocean762
speechocean762 is a speech dataset of native Mandarin speakers (50% adults, 50% children) speaking English. It contains phonemic annotations using the sounds supported by ARPAbet. It was developed by Junbo Zhang et al. Read more on their official GitHub, Hugging Face dataset, and paper.
## This Processed Version
We have processed the dataset into an easily consumable Hugging Face dataset using this data processing script. The script maps the ARPAbet phoneme annotations to IPA, as supported by libraries like ipapy and panphon. We filter out samples with uncertain sound labels, unknown labels, and heavy accents, since this version targets speech-to-IPA modeling rather than pronunciation scoring, which is what the original dataset should be used for.
- The train set has 2481 samples (around 170 minutes of speech).
- The test set has 2479 samples (around 160 minutes of speech).
All audio has been converted to float32 in the [-1, 1] range at a 16 kHz sampling rate.
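The ARPAbet-to-IPA conversion described above can be sketched as a lookup table over phone labels. This is an illustrative subset only; the full mapping lives in the data processing script, and the helper below is hypothetical, not part of the dataset.

```python
# Illustrative subset of an ARPAbet-to-IPA mapping (the full table
# used for this dataset lives in its data processing script).
ARPABET_TO_IPA = {
    "AA": "ɑ", "AE": "æ", "AH": "ʌ", "IY": "i",
    "B": "b", "CH": "tʃ", "DH": "ð", "NG": "ŋ",
    "SH": "ʃ", "TH": "θ", "W": "w", "Z": "z",
}

def arpabet_to_ipa(phones):
    """Convert a list of ARPAbet phone labels to an IPA string.

    Stress digits (0/1/2), if present, are stripped before lookup.
    """
    return "".join(ARPABET_TO_IPA[p.rstrip("012")] for p in phones)

print(arpabet_to_ipa(["DH", "AH0", "NG"]))  # -> ðʌŋ
```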
## Usage

- Request access to this dataset on the Hugging Face website. You will be automatically approved upon accepting the terms.
- Install the `datasets` library:

  ```bash
  pip install datasets
  ```

- Log in to Hugging Face with a token that has gated read access:

  ```bash
  huggingface-cli login
  ```

- Use the dataset in your scripts:

  ```python
  from datasets import load_dataset

  dataset = load_dataset("KoelLabs/SpeechOcean")
  train_ds = dataset["train"]
  test_ds = dataset["test"]

  sample = train_ds[0]
  print(sample)
  ```
## License
The original dataset is released under the Apache 2.0 license; a summary of the license can be found here, and the full license text can be found here. This processed dataset follows the same license.