---
license: cc
task_categories:
  - translation
  - text-generation
  - text2text-generation
language:
  - en
  - si
tags:
  - translation
  - transliteration
  - Sinhala
  - English
  - Singlish
  - NLP
  - dataset
  - low-resource
pretty_name: Sinhala–English–Singlish Translation Dataset
size_categories:
  - 10K<n<100K
---

# Sinhala–English–Singlish Translation Dataset

A parallel corpus of Sinhala sentences, their English translations, and romanized Sinhala (“Singlish”) transliterations.


## 📋 Table of Contents

  1. Dataset Overview
  2. Installation
  3. Quick Start
  4. Dataset Structure
  5. Usage Examples
  6. Citation
  7. License
  8. Credits

## Dataset Overview

- **Description:** 34,500 aligned triplets of
  - Sinhala (native script)
  - English (human translation)
  - Singlish (romanized Sinhala)
- **Source:**
- **DOI:** 10.57967/hf/5605
- **Released:** 2025 (Revision c6560ff)
- **License:** CC (see License)

## Installation

```bash
pip install datasets
```

## Quick Start

```python
from datasets import load_dataset

ds = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
)
print(ds[0])
# {
#   "sinhala": "මෙය මගේ ප්‍රධාන අයිතියයි",
#   "english": "This is my headright.",
#   "singlish": "meya mage pradhana ayithiyayi"
# }
```

## Dataset Structure

| Column     | Type   | Description                            |
| ---------- | ------ | -------------------------------------- |
| `sinhala`  | string | Original sentence in Sinhala script    |
| `english`  | string | Corresponding English translation      |
| `singlish` | string | Romanized (“Singlish”) transliteration |

- **Rows:** 34,500
- **Format:** CSV (served as Parquet on the Hugging Face Hub)

## Usage Examples

### Load into Pandas

```python
import pandas as pd
from datasets import load_dataset

df = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
).to_pandas()

print(df.head())
```

### Fine-tuning a Translation Model

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)

# 1. Data, tokenizer & model
ds = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
)
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# 2. Preprocess: add the T5 task prefix and tokenize source/target pairs
def preprocess(ex):
    inputs = "translate Sinhala to English: " + ex["sinhala"]
    return tokenizer(inputs, text_target=ex["english"], truncation=True)

train_dataset = ds.map(preprocess, remove_columns=ds.column_names)

# 3. Training; DataCollatorForSeq2Seq pads inputs and labels per batch
args = TrainingArguments(
    output_dir="outputs",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
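Once fine-tuning finishes, the checkpoint can be reloaded for inference. A sketch, assuming the model was saved to `outputs` (e.g. via `trainer.save_model("outputs")`; by default the Trainer only writes intermediate `checkpoint-*` subdirectories) and using an illustrative generation length:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Reload the fine-tuned checkpoint (assumes it was saved to "outputs")
tokenizer = AutoTokenizer.from_pretrained("outputs")
model = AutoModelForSeq2SeqLM.from_pretrained("outputs")

# Use the same task prefix as during fine-tuning
text = "translate Sinhala to English: මෙය මගේ ප්‍රධාන අයිතියයි"
inputs = tokenizer(text, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```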

## Citation

```bibtex
@misc{ranuga_disansa_gamage_2025,
    author       = { Ranuga Disansa Gamage and Sasvidu Abesinghe and Sheneli Fernando and Thulana Vithanage },
    title        = { sinhala-english-singlish-translation (Revision b6bde25) },
    year         = 2025,
    url          = { https://huggingface.co/datasets/Programmer-RD-AI/sinhala-english-singlish-translation },
    doi          = { 10.57967/hf/5626 },
    publisher    = { Hugging Face }
}
```

## License

This dataset is released under a Creative Commons (CC) license. See the LICENSE file for the exact terms.