---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: ipa
      dtype: string
    - name: text
      dtype: string
    - name: speaker_code
      dtype: string
    - name: speaker_gender
      dtype: string
    - name: speaker_age
      dtype: int64
    - name: pronunciation_accuracy_0_to_10
      dtype: int64
    - name: pronunciation_completeness_fraction
      dtype: float64
    - name: pauseless_flow_0_to_10
      dtype: int64
    - name: cadence_and_intonation_0_to_10
      dtype: int64
  splits:
    - name: train
      num_bytes: 137594253
      num_examples: 1120
    - name: test
      num_bytes: 130964304.625
      num_examples: 1123
  download_size: 253937800
  dataset_size: 268558557.625
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
license: apache-2.0
task_categories:
  - automatic-speech-recognition
language:
  - en
tags:
  - Speech
  - IPA
  - Mandarin
pretty_name: speechocean762 filtered
size_categories:
  - 1K<n<10K
---

# speechocean762

speechocean762 is a speech dataset of native Mandarin speakers (50% adults, 50% children) speaking English. It contains phonemic annotations using the sounds supported by ARPABet. It was developed by Junbo Zhang et al. Read more on their official GitHub, Hugging Face dataset, and paper.

## This Processed Version

We have processed the dataset into an easily consumable Hugging Face dataset using this data processing script. It maps the ARPABet phoneme annotations to IPA as supported by libraries like ipapy and panphon. We filter out samples with uncertain sound labels, unknown labels, and heavy accents, making the result suitable for Speech-to-IPA modeling (as opposed to pronunciation scoring, which the original dataset is intended for).
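The ARPABet-to-IPA step amounts to a lookup table over phoneme labels. The sketch below is illustrative only: it covers a handful of phonemes and the function name is ours, while the actual processing script uses a complete mapping.

```python
# Illustrative (partial) ARPABet -> IPA mapping; the real processing
# script covers the full ARPABet phoneme inventory.
ARPABET_TO_IPA = {
    "AA": "ɑ", "AE": "æ", "IY": "i", "UW": "u",
    "CH": "tʃ", "JH": "dʒ", "NG": "ŋ", "TH": "θ", "DH": "ð",
}

def arpabet_to_ipa(phones):
    """Convert a list of ARPABet phone labels to an IPA string,
    stripping any trailing stress digits (e.g. 'IY1' -> 'IY')."""
    return "".join(ARPABET_TO_IPA[p.rstrip("012")] for p in phones)

print(arpabet_to_ipa(["TH", "IY1", "NG"]))  # -> θiŋ
```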

Note: we have another version of this dataset. The difference is that in this filtered version, we have removed samples where a separate set of 2 annotators disagreed with the original labels.

- The train set has 1120 samples (around 72 minutes of speech).
- The test set has 1123 samples (around 68 minutes of speech).

All audio has been converted to float32 in the -1 to 1 range at 16 kHz sampling rate.
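The float32 conversion mentioned above corresponds to the standard normalization of 16-bit PCM audio. A minimal sketch (the function name is ours, not part of the dataset tooling):

```python
import numpy as np

def pcm16_to_float32(samples: np.ndarray) -> np.ndarray:
    """Normalize 16-bit PCM samples to float32 in the [-1, 1] range."""
    return samples.astype(np.float32) / 32768.0

audio_int16 = np.array([-32768, 0, 16384], dtype=np.int16)
print(pcm16_to_float32(audio_int16))  # [-1.0, 0.0, 0.5]
```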

## Usage

1. Request access to this dataset on the Hugging Face website. You will be automatically approved upon accepting the terms.
2. `pip install datasets`
3. Log in to Hugging Face using `huggingface-cli login` with a token that has gated read access.
4. Use the dataset in your scripts:

```python
from datasets import load_dataset

dataset = load_dataset("KoelLabs/SpeechOceanNoTH")
train_ds = dataset["train"]
test_ds = dataset["test"]

sample = train_ds[0]
print(sample)
```
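Once loaded, each example exposes its audio in the standard `datasets` Audio shape (`{"array": ..., "sampling_rate": ...}`), so clip durations can be summed directly. The helper below is our own sketch, demonstrated on synthetic data rather than the real split:

```python
def total_minutes(examples):
    """Total speech duration in minutes across an iterable of examples.

    Assumes each example follows the standard `datasets` Audio shape:
    {"audio": {"array": [...], "sampling_rate": int}, ...}.
    """
    seconds = sum(
        len(ex["audio"]["array"]) / ex["audio"]["sampling_rate"] for ex in examples
    )
    return seconds / 60

# With the real dataset: total_minutes(train_ds) should land near the
# ~72 minutes quoted above. Demonstrated here on synthetic examples:
fake = [{"audio": {"array": [0.0] * 16000, "sampling_rate": 16000}}] * 120
print(total_minutes(fake))  # 120 one-second clips -> 2.0 minutes
```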

## License

The original dataset is released under the Apache 2.0 license; a summary of the license can be found here, and the full license text can be found here. This processed dataset follows the same license.