---
license: cc
task_categories:
- translation
- text-generation
- text2text-generation
language:
- en
- si
tags:
- translation
- transliteration
- Sinhala
- English
- Singlish
- NLP
- dataset
- low-resource
pretty_name: Sinhala–English–Singlish Translation Dataset
size_categories:
- 10K<n<100K
---
# Sinhala–English–Singlish Translation Dataset
> A parallel corpus of Sinhala sentences, their English translations, and romanized Sinhala (“Singlish”) transliterations.
---
## 📋 Table of Contents
1. [Dataset Overview](#dataset-overview)
2. [Installation](#installation)
3. [Quick Start](#quick-start)
4. [Dataset Structure](#dataset-structure)
5. [Usage Examples](#usage-examples)
6. [Citation](#citation)
7. [License](#license)
8. [Credits](#credits)
---
## Dataset Overview
- **Description**: 34,500 aligned triplets of
- Sinhala (native script)
- English (human translation)
- Singlish (romanized Sinhala)
- **Source**:
- 📊 Kaggle dataset: `programmerrdai/sinhala-english-singlish-translation-dataset`
- 🛠️ Collection pipeline: GitHub [Sinenglish-LLM-Data-Collection](https://github.com/Programmer-RD-AI-Archive/Sinenglish-LLM-Data-Collection)
- **DOI**: 10.57967/hf/5605
- **Released**: 2025 (Revision `c6560ff`)
- **License**: CC (see [License](#license))
---
## Installation
```bash
pip install datasets
```
---
## Quick Start
```python
from datasets import load_dataset

ds = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
)
print(ds[0])
# {
#   "sinhala": "මෙය මගේ ප්‍රධාන අයිතියයි",
#   "english": "This is my headright.",
#   "singlish": "meya mage pradhana ayithiyayi"
# }
```
---
## Dataset Structure
| Column | Type | Description |
| ---------- | -------- | -------------------------------------- |
| `sinhala` | `string` | Original sentence in Sinhala script |
| `english` | `string` | Corresponding English translation |
| `singlish` | `string` | Romanized (“Singlish”) transliteration |
* **Rows**: 34,500
* **Format**: CSV source, served as auto-converted Parquet on the Hugging Face Hub
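Because each row aligns all three columns, one triplet can be expanded into several directed translation pairs. A sketch in plain Python (the `to_pairs` helper and its `src`/`tgt`/`direction` keys are illustrative, not part of the dataset):

```python
# A sample triplet as it appears in the dataset (values from the Quick Start example)
row = {
    "sinhala": "මෙය මගේ ප්‍රධාන අයිතියයි",
    "english": "This is my headright.",
    "singlish": "meya mage pradhana ayithiyayi",
}

def to_pairs(row):
    """Expand one aligned triplet into directed translation pairs."""
    return [
        {"src": row["sinhala"], "tgt": row["english"], "direction": "si->en"},
        {"src": row["english"], "tgt": row["sinhala"], "direction": "en->si"},
        {"src": row["singlish"], "tgt": row["english"], "direction": "singlish->en"},
    ]

pairs = to_pairs(row)
print(len(pairs))  # 3
```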
---
## Usage Examples
### Load into Pandas
```python
import pandas as pd
from datasets import load_dataset

df = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
).to_pandas()
print(df.head())
```
### Fine-tuning a Translation Model
```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)

# 1. Load the dataset
ds = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
)

# 2. Tokenizer & model
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# 3. Preprocess: prefix the source sentence and tokenize source/target together
def preprocess(ex):
    inputs = "translate Sinhala to English: " + ex["sinhala"]
    return tokenizer(inputs, text_target=ex["english"], truncation=True)

train_dataset = ds.map(preprocess, remove_columns=ds.column_names)

# 4. Training (DataCollatorForSeq2Seq pads inputs and labels per batch)
args = TrainingArguments(
    output_dir="outputs",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
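### Inference

After training, the checkpoint saved under `output_dir` can be loaded back for translation. A minimal sketch (it loads the base `t5-small` so it runs standalone; swap in `"outputs"` to use your fine-tuned weights, and reuse the same task prefix as during preprocessing):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Swap "t5-small" for "outputs" (the TrainingArguments output_dir) after fine-tuning
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Same "translate Sinhala to English: " prefix used in preprocessing
text = "translate Sinhala to English: මෙය මගේ ප්‍රධාන අයිතියයි"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that the base `t5-small` has not seen Sinhala script, so meaningful output requires the fine-tuned checkpoint.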
---
## Citation
```bibtex
@misc{ranuga_disansa_gamage_2025,
author = { Ranuga Disansa Gamage and Sasvidu Abesinghe and Sheneli Fernando and Thulana Vithanage },
title = { sinhala-english-singlish-translation (Revision b6bde25) },
year = 2025,
url = { https://huggingface.co/datasets/Programmer-RD-AI/sinhala-english-singlish-translation },
doi = { 10.57967/hf/5626 },
publisher = { Hugging Face }
}
```
---
## License
This dataset is released under a **Creative Commons (CC) license**. See the [LICENSE](LICENSE) file for details.