---
tags:
- audio
- speech
- arabic
- mozilla-common-voice
- sesame-csm
- conversational-speech
language:
- ar
license: cc0-1.0
pretty_name: Curated Arabic Speech Dataset for Sesame CSM (from MCV17)
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: speaker_id
dtype: int64
splits:
- name: train
num_bytes: 6000782270.368
num_examples: 44328
- name: validation
num_bytes: 661102294.6
num_examples: 4925
download_size: 6538204971
dataset_size: 6661884564.968
---
Curated Arabic Speech Dataset for Sesame CSM (from MCV17)
Dataset Description
This dataset is a curated and preprocessed version of the Arabic (ar) subset of Mozilla Common Voice (MCV) 17.0. It has been specifically prepared for fine-tuning conversational speech models, with a primary focus on the Sesame CSM model architecture. The dataset consists of audio clips in WAV format (24 kHz, mono), their corresponding transcripts, and integer speaker IDs.
The original MCV data was subjected to an extensive cleaning, normalization, and filtering pipeline to improve data quality and ensure suitability for the target model.
- Language: Arabic (ar)
- Source: Mozilla Common Voice 17.0
- Approximate Total Duration (after filtering): 57.86 hours
- Approximate Number of Utterances (after filtering): 49,253
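The Hub splits can be loaded with the Hugging Face datasets library. The following is a minimal sketch, assuming the dataset is hosted on the Hub; the repository ID below is a placeholder and should be replaced with the actual one.

```python
from datasets import load_dataset, Audio

# Load both splits (the repository ID below is a placeholder).
ds = load_dataset("your-username/arabic-speech-sesame-csm-mcv17")

# Audio is stored at 24 kHz; casting the column makes the decoding rate explicit.
ds = ds.cast_column("audio", Audio(sampling_rate=24_000))

sample = ds["train"][0]
print(sample["text"], sample["speaker_id"], sample["audio"]["sampling_rate"])
```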
Dataset Structure
The dataset is provided as JSON Lines (.json) manifest files, one for training and one for validation. Each line in the manifest represents an audio-text-speaker triplet.
Data Fields
Each entry in the manifest files has the following structure:
- text (string): The Arabic transcription corresponding to the audio. It is derived from the original "sentence" field and has undergone cleaning and normalization; punctuation and diacritics have been removed, so the text is ASR-style rather than prosody-preserving TTS text (see "Considerations for Using the Data" below).
- path (string): The relative path to the corresponding audio file (24 kHz, mono WAV). The path is relative to the directory containing the manifest file (e.g., clips_wav_24k/filename.wav).
- speaker (integer): A unique integer ID assigned to each speaker, derived from the original client_id.
Data Splits
The dataset is split into:
- train_manifest.json: Contains approximately 90% of the data, intended for training.
- validation_manifest.json: Contains approximately 10% of the data, intended for validation.
The data was thoroughly shuffled before splitting.
Data Instances
An example from a manifest file:
{
"text": "ايا سعد قل للقس من داخل الدير",
"path": "clips_wav_24k/common_voice_ar_30452352.wav",
"speaker": 21
}
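As an illustration of how an entry can be consumed, here is a minimal sketch that reads the first line of the training manifest and loads its audio. It assumes the soundfile package is installed and is run from the directory containing the manifest, since audio paths are relative to it.

```python
import json

import soundfile as sf  # assumed audio backend; any WAV reader works

# Read the first entry of the training manifest.
with open("train_manifest.json", "r", encoding="utf-8") as f:
    entry = json.loads(f.readline())

# "path" is relative to the manifest's directory.
audio, sr = sf.read(entry["path"])
print(entry["text"], entry["speaker"], sr, f"{len(audio) / sr:.2f} s")
```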
Dataset Creation
Curation Rationale
The primary goal of the curation process was to create a high-quality, clean, and consistently formatted dataset suitable for fine-tuning advanced conversational speech models such as Sesame CSM. This involved addressing common issues found in large crowdsourced datasets, such as inconsistent text, problematic audio, and metadata inaccuracies.
Source Data
- Dataset: Mozilla Common Voice (MCV)
- Version: 17.0
- Subset: Arabic (ar)
Preprocessing Steps
The dataset underwent the following preprocessing, cleaning, and filtering stages:
Phase 1: Text Cleaning & Normalization
- Unicode Normalization (NFKC): Standardized character representations.
- Arabic Character Variant Mapping: Mapped non-standard variants (e.g., Persian ک) to their Arabic equivalents; removed others.
- Ligature Decomposition: Decomposed common Arabic ligatures (e.g., ﻻ to لا).
- Standard Arabic Normalization (camel-tools): Normalized Alef, Alef Maksura (ى to ي), and Teh Marbuta (ة to ه).
- Numeral Processing: Transliterated Eastern Arabic numerals to Western digits, then substituted Western numerals with Arabic word spellings.
- Diacritic Removal: Removed all Arabic diacritics.
- Comprehensive Character Removal: Removed punctuation, symbols, Latin characters, Quranic marks, and Tatweel; standalone Hamza (ء) was kept.
- Whitespace Cleanup: Collapsed runs of whitespace to single spaces and removed leading/trailing whitespace. (A code sketch of this phase follows the list.)
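The following is a minimal sketch of this phase, assuming camel-tools is installed for the standard Arabic normalization helpers. The character ranges and variant mappings are illustrative rather than the exact ones used, and the spelling-out of digits as Arabic words is omitted.

```python
import re
import unicodedata

# camel-tools helpers for standard Arabic normalization (pip install camel-tools)
from camel_tools.utils.dediac import dediac_ar
from camel_tools.utils.normalize import (
    normalize_alef_ar,
    normalize_alef_maksura_ar,
    normalize_teh_marbuta_ar,
)

EASTERN_TO_WESTERN = str.maketrans("٠١٢٣٤٥٦٧٨٩", "0123456789")

def clean_arabic_text(text: str) -> str:
    # Unicode normalization (NFKC) also decomposes ligatures such as ﻻ -> لا
    text = unicodedata.normalize("NFKC", text)
    # Map common Persian variants to their Arabic equivalents (illustrative)
    text = text.replace("ک", "ك").replace("ی", "ي")
    # Standard Arabic normalization: Alef, Alef Maksura (ى -> ي), Teh Marbuta (ة -> ه)
    text = normalize_alef_ar(text)
    text = normalize_alef_maksura_ar(text)
    text = normalize_teh_marbuta_ar(text)
    # Transliterate Eastern Arabic numerals to Western digits
    # (converting digits to Arabic word spellings is omitted in this sketch)
    text = text.translate(EASTERN_TO_WESTERN)
    # Remove diacritics and Tatweel
    text = dediac_ar(text).replace("\u0640", "")
    # Keep only Arabic letters, digits, and spaces; this drops punctuation,
    # symbols, Latin characters, and Quranic marks while keeping Hamza (ء)
    text = re.sub(r"[^\u0621-\u064A0-9 ]+", " ", text)
    # Whitespace cleanup
    return re.sub(r"\s+", " ", text).strip()
```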
Phase 2: Audio & Text Property Filtering, and Metadata Adjustments
9. Audio Path Correction: Ensured audio_filepath in the manifest correctly pointed to local MP3 files.
10. Duration Filtering: Removed utterances shorter than 1.0 second or longer than 20.0 seconds.
11. Text Length Filtering: Removed utterances whose cleaned text was shorter than 2 characters.
12. Character/Word Rate Filtering: Filtered utterances by word rate (0.2-3.0 words/sec) and character rate (0.65-15.5 chars/sec).
13. Metadata Column Filtering: Removed manifest columns with >99% null values (e.g., variant, accents).
14. Audio Property Filtering: Removed utterances with frequency bandwidth below 2000 Hz or peak level below -25 dB. (A sketch of the duration and rate filters follows the list.)
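A minimal sketch of the duration and rate filters is shown below, with thresholds taken from the list above. It assumes the soundfile package can read the clips at this stage; the bandwidth and peak-level checks are not reproduced here.

```python
import soundfile as sf  # assumed audio backend for reading clip metadata

def keep_utterance(audio_path: str, text: str) -> bool:
    """Apply the duration, text-length, and word/character rate filters."""
    info = sf.info(audio_path)
    duration = info.frames / info.samplerate
    if not 1.0 <= duration <= 20.0:           # step 10: duration filtering
        return False
    if len(text) < 2:                         # step 11: text length filtering
        return False
    word_rate = len(text.split()) / duration  # step 12: word/character rates
    char_rate = len(text) / duration
    return 0.2 <= word_rate <= 3.0 and 0.65 <= char_rate <= 15.5
```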
Phase 3: Vocabulary-Based Filtering & Final Preparation
15. Rare Word Utterance Removal: Removed utterances containing any of the 50 least frequent words.
16. Deduplication: Removed duplicate entries based on unique sentence_id (keeping the first occurrence).
17. Audio Format Conversion: Converted MP3 audio files to WAV format (24 kHz, mono).
18. Relative Path Conversion: Changed audio_filepath to be relative to the manifest file's location (e.g., clips_wav_24k/filename.wav).
19. Speaker ID Mapping: Mapped unique client_id strings to sequential integer speaker IDs.
20. Sesame CSM Formatting: Reformatted manifest entries to the required {"text": ..., "path": ..., "speaker": ...} structure.
21. Shuffling & Splitting: Thoroughly shuffled the dataset and split it into training and validation sets. (A sketch of steps 17, 19, and 21 follows the list.)
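A minimal sketch of the conversion, speaker-ID mapping, and shuffle/split steps (17, 19, and 21) follows. It assumes ffmpeg is on the PATH and that intermediate entries carry text, audio_filepath, and client_id fields; the function names and split ratio are illustrative.

```python
import random
import subprocess
from pathlib import Path

def convert_to_wav_24k(mp3_path: str, out_dir: str = "clips_wav_24k") -> str:
    """Step 17: convert an MP3 clip to 24 kHz mono WAV using ffmpeg."""
    out_path = Path(out_dir) / (Path(mp3_path).stem + ".wav")
    out_path.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", mp3_path, "-ar", "24000", "-ac", "1", str(out_path)],
        check=True,
    )
    return str(out_path)

def build_manifests(entries, seed: int = 42, val_fraction: float = 0.1):
    """Steps 19-21: map client_id to integer speaker IDs, shuffle, split 90/10."""
    speaker_map, rows = {}, []
    for e in entries:  # each entry: {"text": ..., "audio_filepath": ..., "client_id": ...}
        spk = speaker_map.setdefault(e["client_id"], len(speaker_map))
        rows.append({"text": e["text"], "path": e["audio_filepath"], "speaker": spk})
    random.Random(seed).shuffle(rows)
    n_val = int(len(rows) * val_fraction)
    return rows[n_val:], rows[:n_val]  # (train, validation)
```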
Considerations for Using the Data
- Text for Sesame CSM: The text field is derived from the original "sentence" field of MCV. Depending on the model's requirements (especially for a TTS-style model that benefits from natural prosody), this text may be over-normalized: it lacks punctuation, diacritics, and original casing. If the model expects more naturalistic text, users may need to adjust the text processing pipeline or start from the "sentence" field before the heavy ASR-focused normalization.
- Audio Format: Audio is 24 kHz, 16-bit, mono WAV.
- Speaker IDs: Speaker IDs are integers mapped from the original client_id values and are consistent within this dataset.
- Potential Biases: As a derivative of Common Voice, this dataset may inherit demographic or dialectal biases present in the source.
- No Test Set: This release includes only train and validation splits. A separate, held-out test set would be needed for final model evaluation (see the sketch after this list).
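If a test set is required, one can be carved out of the training split before fine-tuning. A minimal sketch with the datasets library, assuming the Hub layout above (the repository ID is a placeholder); for stricter evaluation, splitting by speaker_id so that test speakers are unseen during training may be preferable.

```python
from datasets import load_dataset

# Hold out a small test set from the training split (placeholder repo ID).
train_full = load_dataset("your-username/arabic-speech-sesame-csm-mcv17", split="train")
split = train_full.train_test_split(test_size=0.05, seed=42)
train_ds, test_ds = split["train"], split["test"]
print(len(train_ds), len(test_ds))
```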
Licensing Information
This dataset is a derivative of the Mozilla Common Voice 17.0 Arabic dataset, which is released under the Creative Commons CC0 1.0 license (public domain dedication). This curated version is distributed under CC0 1.0 as well (see the license metadata above). Please refer to the original MCV license for full details.
Citation
If you use this dataset, please cite the original Mozilla Common Voice dataset:
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020)},
pages = {4211--4215},
year = {2020}
}
Dataset Curator
Created by M. Adel
Contact