---
language:
  - bn
  - en
  - gu
  - hi
  - kn
  - ml
  - mr
  - or
  - pa
  - ta
  - te
  - ur
license: cc-by-4.0
size_categories:
  - 1M<n<10M
pretty_name: Pralekha
dataset_info:
  features:
    - name: n_id
      dtype: string
    - name: doc_id
      dtype: string
    - name: lang
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: aligned
      num_bytes: 10274361211
      num_examples: 1566404
    - name: unaligned
      num_bytes: 4466506637
      num_examples: 783197
  download_size: 5812005886
  dataset_size: 14740867848
configs:
  - config_name: default
    data_files:
      - split: aligned
        path: data/aligned-*
      - split: unaligned
        path: data/unaligned-*
tags:
  - data-mining
  - document-alignment
  - parallel-corpus
---

# Pralekha: An Indic Document Alignment Evaluation Benchmark

PRALEKHA is a large-scale benchmark for evaluating document-level alignment techniques. It comprises over 2 million documents covering 11 Indic languages and English, with a mix of aligned and unaligned documents.


## Dataset Description

PRALEKHA covers 12 languages—Bengali (ben), Gujarati (guj), Hindi (hin), Kannada (kan), Malayalam (mal), Marathi (mar), Odia (ori), Punjabi (pan), Tamil (tam), Telugu (tel), Urdu (urd), and English (eng). It includes a mixture of high- and medium-resource languages, covering 11 different scripts. The dataset spans two broad domains: news bulletins and podcast scripts, offering both written and spoken forms of data. All the data is human-written or human-verified, ensuring high quality.

The dataset has a 2:1 ratio of aligned to unaligned documents, making it well suited for benchmarking cross-lingual document alignment techniques.

### Data Fields

Each data sample includes the following fields (a short loading sketch follows the list):

- `n_id`: Unique identifier for aligned document pairs.
- `doc_id`: Unique identifier for individual documents.
- `lang`: Language of the document (ISO-3 code).
- `text`: The textual content of the document.
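As a minimal sketch (assuming streaming access to the Hub, and assuming that documents in the `aligned` split which share an `n_id` are cross-lingual versions of the same source document, as the field description suggests), the fields can be inspected like this:

```python
from collections import defaultdict
from itertools import islice

from datasets import load_dataset

# Stream the aligned split so only a small sample is fetched.
aligned = load_dataset("ai4bharat/pralekha", split="aligned", streaming=True)

# Group a sample of documents by n_id; documents sharing an n_id are
# assumed to be cross-lingual versions of the same source document.
groups = defaultdict(list)
for example in islice(aligned, 1000):
    groups[example["n_id"]].append((example["lang"], example["doc_id"]))

# Print one multilingual group from the sample, if any shows up.
for n_id, docs in groups.items():
    if len(docs) > 1:
        print(n_id, docs)
        break
```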

### Data Sources

1. **News Bulletins:** Data was custom-scraped from the Indian Press Information Bureau (PIB) website. Documents were aligned by matching bulletin IDs, which interlink bulletins across languages.
2. **Podcast Scripts:** Data was sourced from *Mann Ki Baat*, a radio program hosted by the Indian Prime Minister. This program, originally spoken in Hindi, was manually transcribed and translated into various Indian languages.

### Dataset Size Statistics

| Split     | Number of Documents | Size (bytes)   |
|-----------|---------------------|----------------|
| Aligned   | 1,566,404           | 10,274,361,211 |
| Unaligned | 783,197             | 4,466,506,637  |
| **Total** | 2,349,601           | 14,740,867,848 |

### Language-wise Statistics

| Language (ISO-3) | Aligned Documents | Unaligned Documents | Total Documents |
|------------------|-------------------|---------------------|-----------------|
| Bengali (ben)    | 95,813            | 47,906              | 143,719         |
| English (eng)    | 298,111           | 149,055             | 447,166         |
| Gujarati (guj)   | 67,847            | 33,923              | 101,770         |
| Hindi (hin)      | 204,809           | 102,404             | 307,213         |
| Kannada (kan)    | 61,998            | 30,999              | 92,997          |
| Malayalam (mal)  | 67,760            | 33,880              | 101,640         |
| Marathi (mar)    | 135,301           | 67,650              | 202,951         |
| Odia (ori)       | 46,167            | 23,083              | 69,250          |
| Punjabi (pan)    | 108,459           | 54,229              | 162,688         |
| Tamil (tam)      | 149,637           | 74,818              | 224,455         |
| Telugu (tel)     | 110,077           | 55,038              | 165,115         |
| Urdu (urd)       | 220,425           | 110,212             | 330,637         |
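The aligned column of this table can be reproduced directly from the data; the sketch below streams the split so the full download is not required (it still iterates over all ~1.5M aligned records):

```python
from collections import Counter

from datasets import load_dataset

# Count aligned documents per language by streaming over the split.
aligned = load_dataset("ai4bharat/pralekha", split="aligned", streaming=True)
counts = Counter(example["lang"] for example in aligned)

for lang, n in sorted(counts.items()):
    print(f"{lang}: {n:,}")
```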

## Usage

You can use the following snippets to download and explore the dataset:

### Downloading the Entire Dataset

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/pralekha")
```
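Once downloaded, the returned `DatasetDict` can be sanity-checked against the split sizes listed above; a small sketch:

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/pralekha")

# The DatasetDict exposes the two splits declared in the card metadata.
print(dataset)                        # summary of splits and features
print(dataset["aligned"].num_rows)    # expected: 1566404
print(dataset["unaligned"].num_rows)  # expected: 783197
```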

### Downloading a Specific Split (aligned or unaligned)

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/pralekha", split="<split_name>")
# For example: dataset = load_dataset("ai4bharat/pralekha", split="aligned")
```

### Downloading a Specific Language from a Split

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/pralekha", split="<split_name>/<lang_code>")
# For example: dataset = load_dataset("ai4bharat/pralekha", split="aligned/ben")
```
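If the slash-style split string above is not resolved by your `datasets` version, one fallback (an assumption on our part, not part of the documented interface) is to load the split and filter on the `lang` column:

```python
from datasets import load_dataset

# Load the aligned split, then keep only Bengali documents.
aligned = load_dataset("ai4bharat/pralekha", split="aligned")
aligned_ben = aligned.filter(lambda example: example["lang"] == "ben")

print(aligned_ben.num_rows)  # expected: 95813, per the table above
```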

## License

This dataset is released under the CC BY 4.0 license.


## Contact

For any questions or feedback, please contact:

Please get in touch with us for any copyright concerns.