Programmer-RD-AI committed (verified) · commit 1b6deab · parent 1acb0bc

Update README.md

Files changed (1): README.md (+156, −1)
language:
- en
- si
tags:
- translation
- transliteration
- Sinhala
- English
- Singlish
- NLP
- dataset
- low-resource
pretty_name: Sinhala–English–Singlish Translation Dataset
---

# Sinhala–English–Singlish Translation Dataset

> A parallel corpus of Sinhala sentences, their English translations, and romanized Sinhala (“Singlish”) transliterations.

---

## 📋 Table of Contents

1. [Dataset Overview](#dataset-overview)
2. [Installation](#installation)
3. [Quick Start](#quick-start)
4. [Dataset Structure](#dataset-structure)
5. [Usage Examples](#usage-examples)
6. [Citation](#citation)
7. [License](#license)

---

## Dataset Overview

- **Description**: 34,500 aligned triplets of
  - Sinhala (native script)
  - English (human translation)
  - Singlish (romanized Sinhala)
- **Source**:
  - 📊 Kaggle dataset: `programmerrdai/sinhala-english-singlish-translation-dataset`
  - 🛠️ Collection pipeline: GitHub [Sinenglish-LLM-Data-Collection](https://github.com/Programmer-RD-AI-Archive/Sinenglish-LLM-Data-Collection)
- **DOI**: 10.57967/hf/5605
- **Released**: 2025 (Revision `c6560ff`)
- **License**: MIT

---

## Installation

```bash
pip install datasets
```

---

## Quick Start

```python
from datasets import load_dataset

ds = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
)
print(ds[0])
# {
#   "sinhala": "මෙය මගේ ප්‍රධාන අයිතියයි",
#   "english": "This is my headright.",
#   "singlish": "meya mage pradhana ayithiyayi"
# }
```

---

## Dataset Structure

| Column     | Type     | Description                            |
| ---------- | -------- | -------------------------------------- |
| `sinhala`  | `string` | Original sentence in Sinhala script    |
| `english`  | `string` | Corresponding English translation      |
| `singlish` | `string` | Romanized (“Singlish”) transliteration |

* **Rows**: 34,500
* **Format**: CSV (viewed as Parquet on the Hugging Face Hub)

---

## Usage Examples

### Load into Pandas

```python
import pandas as pd
from datasets import load_dataset

df = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
).to_pandas()

print(df.head())
```

### Fine-tuning a Translation Model

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Trainer,
    TrainingArguments,
)

# 1. Load the dataset
ds = load_dataset(
    "Programmer-RD-AI/sinhala-english-singlish-translation",
    split="train",
)

# 2. Tokenizer & model
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# 3. Preprocess: build "translate Sinhala to English" input/target pairs
def preprocess(ex):
    inputs = "translate Sinhala to English: " + ex["sinhala"]
    targets = ex["english"]
    return tokenizer(inputs, text_target=targets, truncation=True)

train_dataset = ds.map(preprocess, remove_columns=ds.column_names)

# 4. Training (a seq2seq collator pads variable-length inputs and labels per batch)
args = TrainingArguments(
    output_dir="outputs",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```
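After training, the checkpoint saved to `output_dir` can be used for inference. A minimal generation sketch — shown with the base `t5-small` so it runs without a prior training run; point both `from_pretrained` calls at `"outputs"` to use the fine-tuned weights:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Base checkpoint as a stand-in; after fine-tuning, load "outputs" instead.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Use the same task prefix as during preprocessing.
text = "translate Sinhala to English: මෙය මගේ ප්‍රධාන අයිතියයි"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```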

---

## Citation

```bibtex
@misc{programmer-rd-ai_2025,
  author    = {Programmer-RD-AI},
  title     = {sinhala-english-singlish-translation (Revision c6560ff)},
  year      = {2025},
  url       = {https://huggingface.co/datasets/Programmer-RD-AI/sinhala-english-singlish-translation},
  doi       = {10.57967/hf/5605},
  publisher = {Hugging Face}
}
```

---

## License

This dataset is released under the **MIT License**. See the [LICENSE](LICENSE) file for details.