---
license: cc0-1.0
tags:
  - text-to-speech
  - tts
  - speech-synthesis
  - persian
  - data-collection
  - data-preprocessing
  - speech-processing
  - forced-alignment
  - speech-dataset
  - speech-corpus
  - dataset-preparation
  - persian-speech
  - tts-dataset
  - text-to-speech-dataset
  - mana-tts
  - manatts
  - speech-data-collection
---

# ManaTTS Persian: a recipe for creating TTS datasets for lower resource languages

Mana-TTS is a comprehensive Persian text-to-speech (TTS) dataset featuring 102 hours of high-quality single-speaker audio, designed for speech synthesis and related tasks. The dataset has been carefully collected, processed, and annotated to provide high-quality training data for TTS models. For details on the data processing pipeline and statistics, please refer to the paper in the Citation section.

## Pretrained Models

A Tacotron2 model has been trained on this dataset and is available here.

## Acknowledgement

The raw audio and text files were collected from the archive of Nasl-e-Mana, a magazine devoted to the blind. We thank the Nasl-e-Mana magazine for their invaluable work and for their generosity with the published dataset license. We also extend our gratitude to the Iran Blind Non-governmental Organization for their support and guidance regarding the need for open-access initiatives in this domain.

## Data Columns

Each Parquet file contains the following columns:

- **file name** (string): The unique identifier of the audio file.
- **transcript** (string): The ground-truth transcript corresponding to the audio.
- **duration** (float64): Duration of the audio file in seconds.
- **match quality** (string): Either `"HIGH"` for CER < 0.05 or `"MIDDLE"` for 0.05 < CER < 0.2 between the actual and hypothesis transcripts.
- **hypothesis** (string): The best ASR-generated transcript, used as a hypothesis to find the matching ground-truth transcript.
- **CER** (float64): The Character Error Rate (CER) between the ground-truth and hypothesis transcripts.
- **search type** (int64): Either 1, if the ground-truth transcript is the result of Interval Search, or 2, if it is the result of Gapped Search (refer to the paper for more details).
- **ASRs** (string): The Automatic Speech Recognition (ASR) systems used, in order, to find a satisfying matching transcript.
- **audio** (sequence): The actual audio data.
- **samplerate** (float64): The sample rate of the audio.
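
As a minimal sketch of how these columns fit together (not an official loader), the snippet below streams one sample, prints its metadata, and writes the audio to a WAV file. It assumes the `audio` column decodes to a flat list of float samples and that the third-party `soundfile` package is installed; `sample.wav` is just an illustrative output name.

```python
import numpy as np
import soundfile as sf  # assumption: pip install soundfile
from datasets import load_dataset

# Stream so we can inspect a single row without downloading everything.
dataset = load_dataset("MahtaFetrat/Mana-TTS", split="train", streaming=True)
sample = next(iter(dataset))

print(sample["file name"], sample["duration"], sample["match quality"])
print(sample["transcript"])

# Assumption: `audio` is a flat sequence of float samples.
waveform = np.asarray(sample["audio"], dtype=np.float32)
sf.write("sample.wav", waveform, int(sample["samplerate"]))
```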

## Usage

### Full Dataset

```python
from datasets import load_dataset

dataset = load_dataset("MahtaFetrat/Mana-TTS", split="train")
```

### Partial Download

To download only specific parts (e.g., for Colab/limited storage):

```bash
# Replace XX with the part number (01, 02, etc.)
wget https://huggingface.co/datasets/MahtaFetrat/Mana-TTS/resolve/main/dataset/dataset_part_XX.parquet
```
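
A downloaded part can then be inspected locally. A minimal sketch using pandas (assuming `pandas` and `pyarrow` are installed; the local filename mirrors the wget URL above, with `XX` left as the placeholder for the part you fetched):

```python
import pandas as pd

# Load one Parquet part; requires pyarrow or fastparquet as the engine.
df = pd.read_parquet("dataset_part_XX.parquet")  # replace XX as above
print(df.columns.tolist())
print(df[["file name", "duration", "CER"]].head())
```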

Streaming (avoids full downloads):

```python
from datasets import load_dataset

dataset = load_dataset("MahtaFetrat/Mana-TTS", streaming=True)
for sample in dataset["train"].take(100):  # process samples incrementally
    print(sample)
```
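
Streaming also combines naturally with the quality columns described above. A hedged sketch that keeps only samples with `"HIGH"` match quality, using the standard `filter`/`take` methods of an iterable dataset:

```python
from datasets import load_dataset

dataset = load_dataset("MahtaFetrat/Mana-TTS", streaming=True)

# Keep only the most reliably aligned samples (CER < 0.05 per the column docs).
high_quality = dataset["train"].filter(lambda s: s["match quality"] == "HIGH")

for sample in high_quality.take(10):
    print(sample["file name"], sample["CER"])
```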

## Citation

If you use Mana-TTS in your research or projects, please cite the following paper:

```bibtex
@inproceedings{qharabagh-etal-2025-manatts,
    title = "{M}ana{TTS} {P}ersian: a recipe for creating {TTS} datasets for lower resource languages",
    author = "Qharabagh, Mahta Fetrat and Dehghanian, Zahra and Rabiee, Hamid R.",
    booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)",
    month = apr,
    year = "2025",
    address = "Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.naacl-long.464/",
    pages = "9177--9206",
}
```

## License

This dataset is available under the CC0-1.0 license. However, it must not be used to replicate or imitate the speaker's voice for malicious or unethical purposes, including malicious voice cloning.

## Collaboration and Community Impact

We encourage researchers, developers, and the broader community to utilize the resources provided in this project, particularly in the development of high-quality screen readers and other assistive technologies to support the Iranian blind community. By fostering open-source collaboration, we aim to drive innovation and improve accessibility for all.