
Mandarin Stuttered Speech Dataset (StammerTalk)

The StammerTalk dataset contains 43 hours of spontaneous conversations and voice-command readings from 66 Mandarin Chinese speakers who stutter.

Data Collection Process

The StammerTalk dataset was created by the StammerTalk (口吃说) community (http://stammertalk.net/), in partnership with AImpower.org.

Speech data collection was conducted over videoconferencing platforms by two StammerTalk volunteers, who also stutter. The recordings contain both unscripted conversation between the volunteer and the participant and the participant's dictation of a list of 200 voice commands. In total, 70 adults who stutter (AWS) took part in the recordings with the two StammerTalk volunteers, yielding 48.8 hours of speech from 72 AWS. However, only 66 participants' data are shared in this public version of the dataset, because of differences in the consent terms provided to and received from the participants.

The recorded speech was transcribed both semantically and verbatim, with five distinct types of stuttering events annotated through embedded markup. Producing verbatim transcriptions that include word repetitions (e.g. “My, my, my name”) and interjections (e.g. “hmm”) was a deliberate choice by the StammerTalk community, so that disfluencies are preserved rather than automatically erased by ASR models. The annotation was performed by professional speech data annotators and reviewed by a StammerTalk volunteer.

More details on the data collection process, as well as its community impact, can be found in our CSCW '24 and Interspeech 2024 papers:

  • Qisheng Li and Shaomei Wu. 2024. "I Want to Publicize My Stutter": Community-led Collection and Curation of Chinese Stuttered Speech Data. Proc. ACM Hum.-Comput. Interact. 8, CSCW2, Article 475 (November 2024), 27 pages. https://doi.org/10.1145/3687014 [pdf]
  • Rong Gong, Hongfei Xue, Lezhi Wang, et al. 2024. AS-70: A Mandarin stuttered speech dataset for automatic speech recognition and stuttering event detection. Interspeech 2024. [pdf]

Data Annotation

The speech was manually annotated by professional speech annotation service providers, under the supervision of the StammerTalk community.

Both verbatim and semantic transcriptions were created, with embedded markup for the five types of stuttering events specified in the annotation guidelines:

  • []: Word-level repetition. Repeated words or phrases, enclosed in brackets.
  • /r: Sound repetition. Repeated sounds, such as a consonant or vowel, that do not constitute an entire word.
  • /b: Block. Prolonged blocks or unnatural silences.
  • /p: Prolongation. Prolonged phonemes.
  • /i: Interjection. Excessive utterances like “嗯” (hmm), “啊” (ah), or “呃” (um). Notably, natural-sounding interjections that do not disrupt the speech flow are excluded from this category.

Example:

  • Annotation: 我叫[我叫/p]小/b明,我[我我]住/p/b在呃/i北/r京
  • Interpretation: I am I am (multi-word repetition with prolongation of “am”) Xiao (block) Ming, I I I (single-word repetition) live (prolongation and block) in um (interjection) Bei (“b” sound repetition) Jing.
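
For illustration, below is a minimal Python sketch (not part of the official dataset tooling) that counts these markers in a verbatim transcription line and strips them to approximate the semantic transcription. The function and variable names are our own, and the logic assumes only the markup conventions described above.

```python
import re

# Slash markers from the annotation guidelines; bracketed spans [...] mark
# word/phrase repetitions.
SLASH_MARKERS = {
    "/r": "sound repetition",
    "/b": "block",
    "/p": "prolongation",
    "/i": "interjection",
}

def count_stutter_events(annotation: str) -> dict:
    """Count each type of stuttering event in one verbatim transcription line."""
    counts = {"word repetition": len(re.findall(r"\[[^\]]*\]", annotation))}
    for marker, name in SLASH_MARKERS.items():
        counts[name] = annotation.count(marker)
    return counts

def strip_markup(annotation: str) -> str:
    """Remove bracketed repetitions and slash markers to approximate the
    semantic transcription. Interjection words themselves (e.g. 呃) are kept
    and would need separate handling."""
    text = re.sub(r"\[[^\]]*\]", "", annotation)  # drop repeated spans
    return re.sub(r"/[rbpi]", "", text)           # drop event markers

example = "我叫[我叫/p]小/b明,我[我我]住/p/b在呃/i北/r京"
print(count_stutter_events(example))
# {'word repetition': 2, 'sound repetition': 1, 'block': 2, 'prolongation': 2, 'interjection': 1}
print(strip_markup(example))  # 我叫小明,我住在呃北京
```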

More details on the data annotation process can be found in our CHI '24 and Interspeech 2024 papers:

  • Qisheng Li and Shaomei Wu. 2024. Towards Fair and Inclusive Speech Recognition for Stuttering: Community-led Chinese Stuttered Speech Dataset Creation and Benchmarking. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24). https://doi.org/10.1145/3613905.3650950 [pdf]
  • Rong Gong, Hongfei Xue, Lezhi Wang, et al. 2024. AS-70: A Mandarin stuttered speech dataset for automatic speech recognition and stuttering event detection. Interspeech 2024. [pdf]