Commit e0f2365 — committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,252 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - crowdsourced
+ languages:
+ - bn
+ - en
+ - fil
+ - hi
+ - id
+ - ja
+ - km
+ - lo
+ - ms
+ - my
+ - th
+ - vi
+ - zh
+ licenses:
+ - cc-by-4-0
+ multilinguality:
+ - multilingual
+ - translation
+ size_categories:
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ - structure-prediction
+ task_ids:
+ - machine-translation
+ - parsing
+ ---
+
+ # Dataset Card for Asian Language Treebank (ALT)
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
+ - **Leaderboard:**
+ - **Paper:** [Introduction of the Asian Language Treebank](https://ieeexplore.ieee.org/abstract/document/7918974)
+ - **Point of Contact:** [ALT info]([email protected])
+
+ ### Dataset Summary
+
+ The ALT project aims to advance the state of the art in Asian natural language processing (NLP) through open collaboration on developing and using ALT. It was first conducted by NICT and UCSY, as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). It was then developed further under [ASEAN IVO](https://www.nict.go.jp/en/asean_ivo/index.html), as described on the project homepage.
+
+ The process of building ALT began with sampling about 20,000 sentences from English Wikinews; these sentences were then translated into the other languages.
+
+ ### Supported Tasks and Leaderboards
+
+ Machine Translation, Dependency Parsing
+
+ ### Languages
+
+ The dataset covers 13 languages:
+ * Bengali
+ * English
+ * Filipino
+ * Hindi
+ * Bahasa Indonesia
+ * Japanese
+ * Khmer
+ * Lao
+ * Malay
+ * Myanmar (Burmese)
+ * Thai
+ * Vietnamese
+ * Chinese (Simplified Chinese)
+
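For reference, the configurations defined in `alt.py` below can be loaded with the Hugging Face `datasets` library. A minimal sketch — the helper name `load_alt` is ours, and `load_dataset` needs network access on first use, so it is only wrapped here, not called:

```python
# Sketch: loading one ALT configuration with the `datasets` library.
# The config names mirror BUILDER_CONFIGS in alt.py.
ALT_CONFIGS = [
    "alt-parallel",
    "alt-en",
    "alt-jp",
    "alt-my",
    "alt-km",
    "alt-my-transliteration",
    "alt-my-west-transliteration",
]


def load_alt(config_name="alt-parallel", split="train"):
    """Download and return one ALT configuration (network required)."""
    if config_name not in ALT_CONFIGS:
        raise ValueError(f"unknown ALT config: {config_name}")
    from datasets import load_dataset  # imported lazily; needs `datasets` installed
    return load_dataset("alt", config_name, split=split)
```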
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### ALT Parallel Corpus
+ ```
+ {
+   "SNT.URLID": "80188",
+   "SNT.URLID.SNTID": "1",
+   "url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal",
+   "bg": "[translated sentence]",
+   "en": "[translated sentence]",
+   "en_tok": "[translated sentence]",
+   "fil": "[translated sentence]",
+   "hi": "[translated sentence]",
+   "id": "[translated sentence]",
+   "ja": "[translated sentence]",
+   "khm": "[translated sentence]",
+   "lo": "[translated sentence]",
+   "ms": "[translated sentence]",
+   "my": "[translated sentence]",
+   "th": "[translated sentence]",
+   "vi": "[translated sentence]",
+   "zh": "[translated sentence]"
+ }
+ ```
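When loaded through the `datasets` library, the language fields of a parallel-corpus record sit under a `translation` dict (see the features in `alt.py`). A small sketch, with a hypothetical helper `to_pairs`, for extracting one translation direction:

```python
# Sketch: pull a (source, target) sentence pair out of one loaded record.
def to_pairs(record, src="en", tgt="ja"):
    tr = record["translation"]
    # Japanese and Myanmar fields can be empty in this release, so a key
    # may be missing; signal that with None instead of raising.
    if src not in tr or tgt not in tr:
        return None
    return tr[src], tr[tgt]


example = {"translation": {"en": "Italy have defeated Portugal.", "ja": "[translated sentence]"}}
pair = to_pairs(example)  # → ("Italy have defeated Portugal.", "[translated sentence]")
```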
+
+ #### ALT Treebank
+ ```
+ {
+   "SNT.URLID": "80188",
+   "SNT.URLID.SNTID": "1",
+   "url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal",
+   "status": "draft/reviewed",
+   "value": "(S (S (BASENP (NNP Italy)) (VP (VBP have) (VP (VP (VP (VBN defeated) (BASENP (NNP Portugal))) (ADVP (RB 31-5))) (PP (IN in) (NP (BASENP (NNP Pool) (NNP C)) (PP (IN of) (NP (BASENP (DT the) (NN 2007) (NNP Rugby) (NNP World) (NNP Cup)) (PP (IN at) (NP (BASENP (NNP Parc) (FW des) (NNP Princes)) (COMMA ,) (BASENP (NNP Paris) (COMMA ,) (NNP France))))))))))) (PERIOD .))"
+ }
+ ```
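The `value` field is a bracketed constituency parse. As an illustration (not part of the loader), a minimal recursive parser for this bracket format, assuming well-formed parentheses and whitespace-separated tokens:

```python
# Sketch: parse a bracketed treebank string into nested (label, children) tuples.
def parse_tree(s):
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    pos = 0

    def parse():
        nonlocal pos
        assert tokens[pos] == "("
        pos += 1
        label = tokens[pos]  # constituent label, e.g. "S", "BASENP"
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(parse())  # nested constituent
            else:
                children.append(tokens[pos])  # leaf word
                pos += 1
        pos += 1  # consume ")"
        return (label, children)

    return parse()


tree = parse_tree("(S (BASENP (NNP Italy)) (VP (VBP have)))")
# tree[0] == "S"; tree[1] holds the BASENP and VP subtrees
```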
+
+ #### ALT Myanmar transliteration
+ ```
+ {
+   "en": "CASINO",
+   "my": [
+     "ကက်စီနို",
+     "ကစီနို",
+     "ကာစီနို",
+     "ကာဆီနို"
+   ]
+ }
+ ```
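In the raw transliteration files, each line pairs an English key with Myanmar renderings separated by `|||` (and, in the western-transliteration file, `|` between variants); stray NUL bytes appear around the separator. A sketch of the line parsing that mirrors `alt.py` (the sample values are ASCII placeholders):

```python
# Sketch: parse one "en ||| variant1|variant2" transliteration line.
def parse_translit_line(line):
    # Strip stray NUL bytes that occur around the "|||" separator.
    line = line.replace("\x00", "").strip()
    parts = line.split("|||")
    if len(parts) < 2:
        return None  # skip blank or malformed lines
    en = parts[0].strip()
    my = [v.strip() for v in parts[1].split("|")]
    return {"en": en, "my": my}


record = parse_translit_line("CASINO ||| a|b")  # → {"en": "CASINO", "my": ["a", "b"]}
```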
+
+ ### Data Fields
+
+ #### ALT Parallel Corpus
+ - SNT.URLID: URL ID of the source article listed in [URL.txt](https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206/URL.txt)
+ - SNT.URLID.SNTID: an index number from 1 to 20000 identifying the selected sentence within `SNT.URLID`
+
+ The fields bg, en, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh hold the sentence in the corresponding target language.
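Concretely, each data line starts with an ID such as `SNT.80188.1`, which splits into the URL ID and the sentence ID (this mirrors how `alt.py` splits it):

```python
# Sketch: split a "SNT.<URLID>.<SNTID>" identifier into its two parts.
def split_id(field):
    prefix, urlid, sntid = field.split(".")
    assert prefix == "SNT", f"unexpected ID prefix: {prefix}"
    return urlid, sntid


urlid, sntid = split_id("SNT.80188.1")  # → ("80188", "1")
```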
+
+ #### ALT Treebank
+ - status: indicates how a sentence was annotated; `draft` sentences were annotated by one annotator, while `reviewed` sentences were annotated by two annotators
+
+ The annotation differs from language to language; please see [the guidelines](https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/) for more details.
+
+ ### Data Splits
+
+ |             | train | valid | test |
+ |-------------|-------|-------|------|
+ | # articles  | 1698  | 98    | 97   |
+ | # sentences | 18088 | 1000  | 1018 |
+
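The split files (`URL-train.txt`, `URL-dev.txt`, `URL-test.txt`) assign whole articles to splits, one per line, in the form `URL.<URLID>\t<url>` (format inferred from the loader code in `alt.py`). A sketch of how the loader builds its allow-list of article IDs:

```python
# Sketch: build the per-split allow-list of article URL IDs, as in
# _generate_examples in alt.py.
def parse_split_file(lines):
    allow_urls = {}
    for line in lines:
        sp = line.strip().split("\t")
        if len(sp) < 2:
            continue  # skip blank/malformed lines
        urlid = sp[0].replace("URL.", "")
        allow_urls[urlid] = {"SNT.URLID": urlid, "url": sp[1]}
    return allow_urls


allowed = parse_split_file(
    ["URL.80188\thttp://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal"]
)
```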
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The ALT project was initiated by the [National Institute of Information and Communications Technology, Japan](https://www.nict.go.jp/en/) (NICT) in 2014. NICT started to build the Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build the Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to build the ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ The dataset was sampled from English Wikinews in 2014. The sentences are annotated with word segmentation, POS tags, and syntax information, in addition to word alignment information, by linguistic experts from:
+ * the National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English
+ * the University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar
+ * the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian
+ * the Institute for Infocomm Research, Singapore (I2R) for Malay
+ * the Institute of Information Technology, Vietnam (IOIT) for Vietnamese
+ * the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) for Khmer
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ * the National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English
+ * the University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar
+ * the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian
+ * the Institute for Infocomm Research, Singapore (I2R) for Malay
+ * the Institute of Information Technology, Vietnam (IOIT) for Vietnamese
+ * the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) for Khmer
+
+ ### Licensing Information
+
+ [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
+
+ ### Citation Information
+
+ Please cite the following if you make use of the dataset:
+
+ Hammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, Chenchen Ding. (2016) "Introduction of the Asian Language Treebank." Oriental COCOSDA.
+
+ BibTeX:
+ ```
+ @inproceedings{riza2016introduction,
+   title={Introduction of the asian language treebank},
+   author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},
+   booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},
+   pages={1--6},
+   year={2016},
+   organization={IEEE}
+ }
+ ```
alt.py ADDED
@@ -0,0 +1,429 @@
+ #!/usr/bin/env python
+ # -*- coding: utf-8 -*-
+ """Asian Language Treebank (ALT) Project"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{riza2016introduction,
+     title={Introduction of the asian language treebank},
+     author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},
+     booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},
+     pages={1--6},
+     year={2016},
+     organization={IEEE}
+ }
+ """
+
+ _HOMEPAGE = "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/"
+
+ _DESCRIPTION = """\
+ The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).
+ """
+
+ _URLs = {
+     "alt": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206.zip",
+     "alt-en": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/English-ALT-20170107.zip",
+     "alt-jp": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/Japanese-ALT-20170330.zip",
+     "alt-my": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/my-alt-190530.zip",
+     "alt-my-transliteration": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/my-en-transliteration.zip",
+     "alt-my-west-transliteration": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/western-myanmar-transliteration.zip",
+     "alt-km": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/km-nova-181101.zip",
+ }
+
+ _SPLIT = {
+     "train": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt",
+     "dev": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt",
+     "test": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt",
+ }
+
+ _WIKI_URL = "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206/URL.txt"
+
+
+ class AltParallelConfig(datasets.BuilderConfig):
+     """BuilderConfig for the ALT parallel corpus."""
+
+     def __init__(self, languages, **kwargs):
+         """BuilderConfig for the ALT parallel corpus.
+
+         Args:
+             languages: list of language codes to load translations for; each code
+                 must be one of the available ALT language codes below.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         name = "alt-parallel"
+         description = "ALT Parallel Corpus"
+         super(AltParallelConfig, self).__init__(
+             name=name,
+             description=description,
+             version=datasets.Version("1.0.0", ""),
+             **kwargs,
+         )
+
+         available_langs = set(
+             ["bg", "en", "en_tok", "fil", "hi", "id", "ja", "khm", "lo", "ms", "my", "th", "vi", "zh"]
+         )
+         for lang in languages:
+             assert lang in available_langs, f"unsupported language code: {lang}"
+
+         self.languages = languages
+
+
+ class Alt(datasets.GeneratorBasedBuilder):
+     """Asian Language Treebank (ALT) Project"""
+
+     BUILDER_CONFIGS = [
+         AltParallelConfig(
+             languages=["bg", "en", "en_tok", "fil", "hi", "id", "ja", "khm", "lo", "ms", "my", "th", "vi", "zh"]
+         ),
+         datasets.BuilderConfig(name="alt-en", version=datasets.Version("1.0.0"), description="English ALT"),
+         datasets.BuilderConfig(name="alt-jp", version=datasets.Version("1.0.0"), description="Japanese ALT"),
+         datasets.BuilderConfig(name="alt-my", version=datasets.Version("1.0.0"), description="Myanmar ALT"),
+         datasets.BuilderConfig(name="alt-km", version=datasets.Version("1.0.0"), description="Khmer ALT"),
+         datasets.BuilderConfig(
+             name="alt-my-transliteration",
+             version=datasets.Version("1.0.0"),
+             description="Myanmar-English Transliteration Dataset",
+         ),
+         datasets.BuilderConfig(
+             name="alt-my-west-transliteration",
+             version=datasets.Version("1.0.0"),
+             description="Latin-Myanmar Transliteration Dataset",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "alt-parallel"
+
+     def _info(self):
+         if self.config.name.startswith("alt-parallel"):
+             features = datasets.Features(
+                 {
+                     "SNT.URLID": datasets.Value("string"),
+                     "SNT.URLID.SNTID": datasets.Value("string"),
+                     "url": datasets.Value("string"),
+                     "translation": datasets.features.Translation(languages=self.config.languages),
+                 }
+             )
+         elif self.config.name == "alt-en":
+             features = datasets.Features(
+                 {
+                     "SNT.URLID": datasets.Value("string"),
+                     "SNT.URLID.SNTID": datasets.Value("string"),
+                     "url": datasets.Value("string"),
+                     "status": datasets.Value("string"),
+                     "value": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "alt-jp":
+             features = datasets.Features(
+                 {
+                     "SNT.URLID": datasets.Value("string"),
+                     "SNT.URLID.SNTID": datasets.Value("string"),
+                     "url": datasets.Value("string"),
+                     "status": datasets.Value("string"),
+                     "value": datasets.Value("string"),
+                     "word_alignment": datasets.Value("string"),
+                     "jp_tokenized": datasets.Value("string"),
+                     "en_tokenized": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "alt-my":
+             features = datasets.Features(
+                 {
+                     "SNT.URLID": datasets.Value("string"),
+                     "SNT.URLID.SNTID": datasets.Value("string"),
+                     "url": datasets.Value("string"),
+                     "value": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name in ("alt-my-transliteration", "alt-my-west-transliteration"):
+             features = datasets.Features(
+                 {
+                     "en": datasets.Value("string"),
+                     "my": datasets.Sequence(datasets.Value("string")),
+                 }
+             )
+         elif self.config.name == "alt-km":
+             features = datasets.Features(
+                 {
+                     "SNT.URLID": datasets.Value("string"),
+                     "SNT.URLID.SNTID": datasets.Value("string"),
+                     "url": datasets.Value("string"),
+                     "km_pos_tag": datasets.Value("string"),
+                     "km_tokenized": datasets.Value("string"),
+                 }
+             )
+         else:
+             raise ValueError(f"unexpected config name: {self.config.name}")
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         if self.config.name.startswith("alt-parallel"):
+             data_path = dl_manager.download_and_extract(_URLs["alt"])
+         else:
+             data_path = dl_manager.download_and_extract(_URLs[self.config.name])
+
+         if self.config.name in ("alt-my-transliteration", "alt-my-west-transliteration"):
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={"basepath": data_path, "split": None},
+                 )
+             ]
+         else:
+             data_split = {k: dl_manager.download_and_extract(v) for k, v in _SPLIT.items()}
+
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={"basepath": data_path, "split": data_split["train"]},
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     gen_kwargs={"basepath": data_path, "split": data_split["dev"]},
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={"basepath": data_path, "split": data_split["test"]},
+                 ),
+             ]
+
+     def _generate_examples(self, basepath, split=None):
+         allow_urls = {}
+         if split is not None:
+             with open(split, encoding="utf-8") as fin:
+                 for line in fin:
+                     sp = line.strip().split("\t")
+                     urlid = sp[0].replace("URL.", "")
+                     allow_urls[urlid] = {"SNT.URLID": urlid, "url": sp[1]}
+
+         data = {}
+         if self.config.name.startswith("alt-parallel"):
+             for lang in self.config.languages:
+                 file_path = os.path.join(basepath, "ALT-Parallel-Corpus-20191206", f"data_{lang}.txt")
+                 with open(file_path, encoding="utf-8") as fin:
+                     for line in fin:
+                         sp = line.strip().split("\t")
+
+                         _, urlid, sntid = sp[0].split(".")
+                         if urlid not in allow_urls:
+                             continue
+
+                         if sntid not in data:
+                             # Build a fresh record per sentence. A shallow copy of a
+                             # shared template would make every record reuse the same
+                             # "translation" dict, mixing all sentences together.
+                             data[sntid] = {
+                                 "SNT.URLID": urlid,
+                                 "SNT.URLID.SNTID": sntid,
+                                 "url": allow_urls[urlid]["url"],
+                                 "translation": {},
+                             }
+
+                         # Note that Japanese and Myanmar texts have empty sentence fields in this release.
+                         if len(sp) >= 2:
+                             data[sntid]["translation"][lang] = sp[1]
+
+         elif self.config.name == "alt-en":
+             data = {}
+             for fname in ["English-ALT-Draft.txt", "English-ALT-Reviewed.txt"]:
+                 file_path = os.path.join(basepath, "English-ALT-20170107", fname)
+                 with open(file_path, encoding="utf-8") as fin:
+                     for line in fin:
+                         sp = line.strip().split("\t")
+
+                         _, urlid, sntid = sp[0].split(".")
+                         if urlid not in allow_urls:
+                             continue
+
+                         data[sntid] = {
+                             "SNT.URLID": urlid,
+                             "SNT.URLID.SNTID": sntid,
+                             "url": allow_urls[urlid]["url"],
+                             "status": "draft" if fname == "English-ALT-Draft.txt" else "reviewed",
+                             "value": sp[1],
+                         }
+         elif self.config.name == "alt-jp":
+             data = {}
+             for fname in ["Japanese-ALT-Draft.txt", "Japanese-ALT-Reviewed.txt"]:
+                 file_path = os.path.join(basepath, "Japanese-ALT-20170330", fname)
+                 with open(file_path, encoding="utf-8") as fin:
+                     for line in fin:
+                         sp = line.strip().split("\t")
+                         _, urlid, sntid = sp[0].split(".")
+                         if urlid not in allow_urls:
+                             continue
+
+                         data[sntid] = {
+                             "SNT.URLID": urlid,
+                             "SNT.URLID.SNTID": sntid,
+                             "url": allow_urls[urlid]["url"],
+                             "value": sp[1],
+                             "status": "draft" if fname == "Japanese-ALT-Draft.txt" else "reviewed",
+                             "word_alignment": None,
+                             "en_tokenized": None,
+                             "jp_tokenized": None,
+                         }
+
+             keys = {
+                 "word_alignment": "word-alignment/data_ja.en-ja",
+                 "en_tokenized": "word-alignment/data_ja.en-tok",
+                 "jp_tokenized": "word-alignment/data_ja.ja-tok",
+             }
+             for k, fname in keys.items():
+                 file_path = os.path.join(basepath, "Japanese-ALT-20170330", fname)
+                 with open(file_path, encoding="utf-8") as fin:
+                     for line in fin:
+                         sp = line.strip().split("\t")
+
+                         # Note that Japanese and Myanmar texts have empty sentence fields in this release.
+                         if len(sp) < 2:
+                             continue
+
+                         _, urlid, sntid = sp[0].split(".")
+                         if urlid not in allow_urls:
+                             continue
+
+                         if sntid in data:
+                             data[sntid][k] = sp[1]
+
+         elif self.config.name == "alt-my":
+             data = {}
+             file_path = os.path.join(basepath, "my-alt-190530", "data")
+             with open(file_path, encoding="utf-8") as fin:
+                 for line in fin:
+                     sp = line.strip().split("\t")
+                     _, urlid, sntid = sp[0].split(".")
+                     if urlid not in allow_urls:
+                         continue
+
+                     data[sntid] = {
+                         "SNT.URLID": urlid,
+                         "SNT.URLID.SNTID": sntid,
+                         "url": allow_urls[urlid]["url"],
+                         "value": sp[1],
+                     }
+
+         elif self.config.name == "alt-km":
+             data = {}
+             for fname in ["data_km.km-tag.nova", "data_km.km-tok.nova"]:
+                 file_path = os.path.join(basepath, "km-nova-181101", fname)
+                 with open(file_path, encoding="utf-8") as fin:
+                     for line in fin:
+                         sp = line.strip().split("\t")
+                         _, urlid, sntid = sp[0].split(".")
+                         if urlid not in allow_urls:
+                             continue
+
+                         k = "km_pos_tag" if fname == "data_km.km-tag.nova" else "km_tokenized"
+                         if sntid not in data:
+                             data[sntid] = {
+                                 "SNT.URLID": urlid,
+                                 "SNT.URLID.SNTID": sntid,
+                                 "url": allow_urls[urlid]["url"],
+                                 "km_pos_tag": None,
+                                 "km_tokenized": None,
+                             }
+                         data[sntid][k] = sp[1]
+
+         elif self.config.name == "alt-my-transliteration":
+             file_path = os.path.join(basepath, "my-en-transliteration", "data.txt")
+             # errors="ignore" is needed because the file contains a few bytes that
+             # are not valid UTF-8; decoding them strictly raises UnicodeDecodeError.
+             with open(file_path, encoding="utf-8", errors="ignore") as fin:
+                 _id = 0
+                 for line in fin:
+                     # Some lines contain stray NUL bytes around the "|||" separator; strip them.
+                     line = line.strip().replace("\x00", "")
+                     sp = line.split("|||")
+
+                     # Skip the blank lines that appear between the actual entries.
+                     if len(sp) < 2:
+                         continue
+
+                     data[_id] = {"en": sp[0].strip(), "my": [sp[1].strip()]}
+                     _id += 1
+         elif self.config.name == "alt-my-west-transliteration":
+             file_path = os.path.join(basepath, "western-myanmar-transliteration", "321.txt")
+             # errors="ignore" is needed because the file contains a few bytes that
+             # are not valid UTF-8; decoding them strictly raises UnicodeDecodeError.
+             with open(file_path, encoding="utf-8", errors="ignore") as fin:
+                 _id = 0
+                 for line in fin:
+                     line = line.strip().replace("\x00", "")
+                     sp = line.split("|||")
+                     if len(sp) < 2:
+                         continue
+
+                     data[_id] = {"en": sp[0].strip(), "my": [v.strip() for v in sp[1].split("|")]}
+                     _id += 1
+
+         for _id, k in enumerate(data, start=1):
+             yield _id, data[k]
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"alt-parallel": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "translation": {"languages": ["bg", "en", "en_tok", "fil", "hi", "id", "ja", "khm", "lo", "ms", "my", "th", "vi", "zh"], "id": null, "_type": "Translation"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-parallel", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", 
"num_bytes": 41438158, "num_examples": 18094, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 2693446, "num_examples": 1004, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 3816979, "num_examples": 1019, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206.zip": {"num_bytes": 21105607, "checksum": "05f7b31b517d4c4e074bb7fb57277758c0e3e15d1ad9cfc5727e9bce79b07bbd"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": "d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 21285784, "post_processing_size": null, "dataset_size": 47948583, "size_in_bytes": 69234367}, "alt-en": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "status": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-en", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 10075609, "num_examples": 17889, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 544739, "num_examples": 988, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 567292, "num_examples": 1017, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/English-ALT-20170107.zip": {"num_bytes": 2558878, "checksum": "c1d7dcbbf5548cfad9232c07464ff4bb0cf5fb2cd0c00af53cf5fa02a02594f0"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": 
"d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 2739055, "post_processing_size": null, "dataset_size": 11187640, "size_in_bytes": 13926695}, "alt-jp": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "status": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}, "word_alignment": {"dtype": "string", "id": null, "_type": "Value"}, "jp_tokenized": {"dtype": "string", "id": null, "_type": "Value"}, "en_tokenized": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-jp", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 21891867, "num_examples": 17202, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 1181587, "num_examples": 953, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 1175624, "num_examples": 931, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/Japanese-ALT-20170330.zip": {"num_bytes": 11827822, "checksum": 
"7749af9f337fcbf09dffffc2d5314ea5757a91ffb199aaa4f027467a3ecd805e"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": "d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 12007999, "post_processing_size": null, "dataset_size": 24249078, "size_in_bytes": 36257077}, "alt-my": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "value": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-my", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 20433275, "num_examples": 18088, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 1111410, "num_examples": 1000, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 1135209, "num_examples": 1018, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/my-alt-190530.zip": {"num_bytes": 2848125, "checksum": "d77ef18364bcb2b149503a5ed77734b07b103bd277f8ed92716555f3deedaf95"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": "d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, 
"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 3028302, "post_processing_size": null, "dataset_size": 22679894, "size_in_bytes": 25708196}, "alt-km": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"SNT.URLID": {"dtype": "string", "id": null, "_type": "Value"}, "SNT.URLID.SNTID": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "km_pos_tag": {"dtype": "string", "id": null, "_type": "Value"}, "km_tokenized": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-km", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12015411, "num_examples": 18088, "dataset_name": "alt"}, "validation": {"name": "validation", "num_bytes": 655232, "num_examples": 1000, "dataset_name": "alt"}, "test": {"name": "test", "num_bytes": 673753, "num_examples": 1018, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/km-nova-181101.zip": {"num_bytes": 2230655, "checksum": "0c6457d4a3327f3dc0b381704cbad71af120e963bfa1cdb06765fa0ed0c9098a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-train.txt": {"num_bytes": 161862, "checksum": 
"d57d680eebc9823b65c74c5de95320f17c3a5ead94bfa66a6849f3ed0cdd411a"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-dev.txt": {"num_bytes": 9082, "checksum": "e3d35c2f54e204216011a2509925b359c5712c768c2b17bc74e19b8d4ec7e50d"}, "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/URL-test.txt": {"num_bytes": 9233, "checksum": "6d67d6bf5c4e7574116355d71ef927c66aca2f7ab7267b14591ea250f24ec722"}}, "download_size": 2410832, "post_processing_size": null, "dataset_size": 13344396, "size_in_bytes": 15755228}, "alt-my-transliteration": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. 
ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"en": {"dtype": "string", "id": null, "_type": "Value"}, "my": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-my-transliteration", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4249424, "num_examples": 84022, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/my-en-transliteration.zip": {"num_bytes": 1232127, "checksum": "5b348c0f9e92d4699fddb4c64fd7d929eb6f6de6f7ce4d879bf91e8d4a82f063"}}, "download_size": 1232127, "post_processing_size": null, "dataset_size": 4249424, "size_in_bytes": 5481551}, "alt-my-west-transliteration": {"description": "The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. 
It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, Chinese (Simplified Chinese).\n", "citation": "@inproceedings{riza2016introduction,\n title={Introduction of the asian language treebank},\n author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others},\n booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)},\n pages={1--6},\n year={2016},\n organization={IEEE}\n}\n", "homepage": "https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/", "license": "", "features": {"en": {"dtype": "string", "id": null, "_type": "Value"}, "my": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "alt", "config_name": "alt-my-west-transliteration", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7412043, "num_examples": 107121, "dataset_name": "alt"}}, "download_checksums": {"https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/western-myanmar-transliteration.zip": {"num_bytes": 2830071, "checksum": "c3f1419022d823791b6d85b259a18ab11d8f8800367d7ec4319e49fc016ec396"}}, "download_size": 2830071, "post_processing_size": null, "dataset_size": 
7412043, "size_in_bytes": 10242114}}
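The per-split metadata above can be checked programmatically. A minimal sketch (using an abbreviated, hand-copied excerpt of the `alt-en` entry, not the full `dataset_infos.json`) that sums split counts and verifies them against the recorded `dataset_size`:

```python
import json

# Abbreviated copy of the "alt-en" splits from dataset_infos.json above;
# only the fields needed for this sketch are kept.
info = json.loads("""
{
  "splits": {
    "train":      {"num_bytes": 10075609, "num_examples": 17889},
    "validation": {"num_bytes": 544739,   "num_examples": 988},
    "test":       {"num_bytes": 567292,   "num_examples": 1017}
  },
  "dataset_size": 11187640
}
""")

# Sum examples and bytes across the three splits.
total_examples = sum(s["num_examples"] for s in info["splits"].values())
total_bytes = sum(s["num_bytes"] for s in info["splits"].values())

# The split byte counts should add up to the recorded dataset_size.
assert total_bytes == info["dataset_size"]
print(total_examples, total_bytes)  # → 19894 11187640
```

The same pattern applies to any other config in the file (e.g. `alt-jp`, `alt-km`), since every entry records its splits under the same schema.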
dummy/alt-en/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a5e039367a0d6506863f5521fd10cba690e5a4791fbbe98a3302a9287138a99
+ size 3443
dummy/alt-jp/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7963d8dcc735bcf78050cfd2922a752ebc19734632b415af16e02571e8ae3dff
+ size 7962
dummy/alt-km/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c01f6c717a7e2e5573347f5d0d8b829719d68712c8587a634e38f5cc1c69b0c4
+ size 2754
dummy/alt-my-transliteration/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:778ede10327ccb1637b3498efa40c9ff454483e7d45cee5f7b515dd4440f0859
+ size 944
dummy/alt-my-west-transliteration/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:769d995fed788e5b9d4c0c02333f5c96fe9370ea29b4cbcf25f2fc8d5c2d0bbf
+ size 1806
dummy/alt-my/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac5309d4fde9536f48f571d29e7fa8ae773852941bfe0363d5795bc51b861c4e
+ size 2701
dummy/alt-parallel/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9cd0b8ac4e604549f7d95f3f727e8ee42010edabbd1b6c83703ad980e5cef67e
+ size 13079