---
license: cc
task_categories:
- translation
- text-generation
- text2text-generation
language:
- en
- si
tags:
- translation
- transliteration
- Sinhala
- English
- Singlish
- NLP
- dataset
- low-resource
pretty_name: Sinhala–English–Singlish Translation Dataset
size_categories:
- 10K<n<100K
---

# Sinhala–English–Singlish Translation Dataset

> A parallel corpus of Sinhala sentences, their English translations, and romanized Sinhala (“Singlish”) transliterations.  

---

## 📋 Table of Contents

1. [Dataset Overview](#dataset-overview)  
2. [Installation](#installation)  
3. [Quick Start](#quick-start)  
4. [Dataset Structure](#dataset-structure)  
5. [Usage Examples](#usage-examples)  
6. [Citation](#citation)  
7. [License](#license)  
8. [Credits](#credits)  

---

## Dataset Overview

- **Description**: 34,500 aligned triplets of  
  - Sinhala (native script)  
  - English (human translation)  
  - Singlish (romanized Sinhala)  
- **Source**:  
  - 📊 Kaggle dataset: `programmerrdai/sinhala-english-singlish-translation-dataset`  
  - 🛠️ Collection pipeline: GitHub [Sinenglish-LLM-Data-Collection](https://github.com/Programmer-RD-AI-Archive/Sinenglish-LLM-Data-Collection)  
- **DOI**: 10.57967/hf/5605  
- **Released**: 2025 (Revision `c6560ff`)  
- **License**: CC (see [License](#license))

---

## Installation

```bash
pip install datasets
```

---

## Quick Start

```python
from datasets import load_dataset

ds = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation", 
    split="train"
)
print(ds[0])
# {
#   "sinhala": "මෙය මගේ ප්‍රධාන අයිතියයි",
#   "english": "This is my headright.",
#   "singlish": "meya mage pradhana ayithiyayi"
# }
```

---

## Dataset Structure

| Column     | Type     | Description                            |
| ---------- | -------- | -------------------------------------- |
| `sinhala`  | `string` | Original sentence in Sinhala script    |
| `english`  | `string` | Corresponding English translation      |
| `singlish` | `string` | Romanized (“Singlish”) transliteration |

* **Rows**: 34,500
* **Format**: CSV (viewed as Parquet on HF)

---

## Usage Examples

### Load into Pandas

```python
import pandas as pd
from datasets import load_dataset

df = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation", 
    split="train"
).to_pandas()

print(df.head())
```
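
Once in Pandas, quick corpus statistics are one-liners. The sketch below uses an inline one-row frame standing in for the full download so it runs offline; substitute the `to_pandas()` result above for real numbers:

```python
import pandas as pd

# One illustrative row standing in for the 34,500-row dataframe.
df = pd.DataFrame({
    "sinhala":  ["මෙය මගේ ප්‍රධාන අයිතියයි"],
    "english":  ["This is my headright."],
    "singlish": ["meya mage pradhana ayithiyayi"],
})

# Whitespace token counts per column: a quick sanity check that the
# translation and transliteration track the source sentence in length.
token_counts = df.apply(lambda col: col.str.split().str.len())
print(token_counts.mean())
```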

### Fine-tuning a Translation Model

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)

# 1. Data, tokenizer & model
ds = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
)
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# 2. Preprocess: prefix the source sentence and tokenize source/target pairs
def preprocess(ex):
    inputs = "translate Sinhala to English: " + ex["sinhala"]
    return tokenizer(inputs, text_target=ex["english"], truncation=True)

train_dataset = ds.map(preprocess, remove_columns=ds.column_names)

# 3. Training (the collator pads variable-length examples per batch)
args = TrainingArguments(
    output_dir="outputs",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
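
After training, the checkpoint saved under `output_dir` can be used for translation via `generate`. The sketch below loads the base `t5-small` so it runs as-is; point `from_pretrained` at `"outputs"` to use fine-tuned weights (until fine-tuning, the output will not be a meaningful translation):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the base checkpoint; replace "t5-small" with "outputs" after fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

text = "translate Sinhala to English: මෙය මගේ ප්‍රධාන අයිතියයි"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```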

---

## Citation

```bibtex
@misc{ranuga_disansa_gamage_2025,
	author       = { Ranuga Disansa Gamage and Sasvidu Abesinghe and Sheneli Fernando and Thulana Vithanage },
	title        = { sinhala-english-singlish-translation (Revision b6bde25) },
	year         = 2025,
	url          = { https://huggingface.co/datasets/Programmer-RD-AI/sinhala-english-singlish-translation },
	doi          = { 10.57967/hf/5626 },
	publisher    = { Hugging Face }
}
```

---

## License

This dataset is released under a **Creative Commons (CC) license**. See the [LICENSE](LICENSE) file for details.