parquet-converter committed on
Commit 5ca5dc2 · 1 Parent(s): ebac932

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,282 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - machine-generated
- - expert-generated
- language:
- - ko
- license:
- - cc-by-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 100K<n<1M
- source_datasets:
- - extended|multi_nli
- - extended|snli
- - extended|xnli
- task_categories:
- - text-classification
- task_ids:
- - natural-language-inference
- - multi-input-text-classification
- paperswithcode_id: kornli
- pretty_name: KorNLI
- dataset_info:
- - config_name: multi_nli
-   features:
-   - name: premise
-     dtype: string
-   - name: hypothesis
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: entailment
-           1: neutral
-           2: contradiction
-   splits:
-   - name: train
-     num_bytes: 84729207
-     num_examples: 392702
-   download_size: 42113232
-   dataset_size: 84729207
- - config_name: snli
-   features:
-   - name: premise
-     dtype: string
-   - name: hypothesis
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: entailment
-           1: neutral
-           2: contradiction
-   splits:
-   - name: train
-     num_bytes: 80137097
-     num_examples: 550152
-   download_size: 42113232
-   dataset_size: 80137097
- - config_name: xnli
-   features:
-   - name: premise
-     dtype: string
-   - name: hypothesis
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: entailment
-           1: neutral
-           2: contradiction
-   splits:
-   - name: validation
-     num_bytes: 518830
-     num_examples: 2490
-   - name: test
-     num_bytes: 1047437
-     num_examples: 5010
-   download_size: 42113232
-   dataset_size: 1566267
- ---
-
- # Dataset Card for "kor_nli"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/kakaobrain/KorNLUDatasets](https://github.com/kakaobrain/KorNLUDatasets)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 120.49 MB
- - **Size of the generated dataset:** 158.72 MB
- - **Total amount of disk used:** 279.21 MB
-
- ### Dataset Summary
-
- Korean Natural Language Inference datasets.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### multi_nli
-
- - **Size of downloaded dataset files:** 40.16 MB
- - **Size of the generated dataset:** 80.80 MB
- - **Total amount of disk used:** 120.97 MB
-
- An example of 'train' looks as follows.
- ```
-
- ```
-
- #### snli
-
- - **Size of downloaded dataset files:** 40.16 MB
- - **Size of the generated dataset:** 76.42 MB
- - **Total amount of disk used:** 116.59 MB
-
- An example of 'train' looks as follows.
- ```
-
- ```
-
- #### xnli
-
- - **Size of downloaded dataset files:** 40.16 MB
- - **Size of the generated dataset:** 1.49 MB
- - **Total amount of disk used:** 41.66 MB
-
- An example of 'validation' looks as follows.
- ```
-
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### multi_nli
- - `premise`: a `string` feature.
- - `hypothesis`: a `string` feature.
- - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
-
- #### snli
- - `premise`: a `string` feature.
- - `hypothesis`: a `string` feature.
- - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
-
- #### xnli
- - `premise`: a `string` feature.
- - `hypothesis`: a `string` feature.
- - `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
-
- ### Data Splits
-
- #### multi_nli
-
- |         |train |
- |---------|-----:|
- |multi_nli|392702|
-
- #### snli
-
- |    |train |
- |----|-----:|
- |snli|550152|
-
- #### xnli
-
- |    |validation|test|
- |----|---------:|---:|
- |xnli|      2490|5010|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- The dataset is licensed under Creative Commons [Attribution-ShareAlike license (CC BY-SA 4.0)](http://creativecommons.org/licenses/by-sa/4.0/).
-
- ### Citation Information
-
- ```
- @article{ham2020kornli,
-   title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
-   author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
-   journal={arXiv preprint arXiv:2004.03289},
-   year={2020}
- }
-
- ```
-
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
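For reference, here is a minimal usage sketch for the configurations described in the card above, using the `datasets` library. The dataset id `kor_nli` and the config names (`multi_nli`, `snli`, `xnli`) are taken from the card; the snippet is illustrative and not part of this commit.

```python
# Illustrative sketch only: load the configs named in the dataset card above.
from datasets import load_dataset

multi_nli = load_dataset("kor_nli", "multi_nli")  # single "train" split
xnli = load_dataset("kor_nli", "xnli")            # "validation" and "test" splits

example = multi_nli["train"][0]
# Each example carries "premise", "hypothesis" and an integer "label":
# 0 = entailment, 1 = neutral, 2 = contradiction.
print(example["premise"], example["hypothesis"], example["label"])
```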
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"multi_nli": {"description": " Korean Natural Language Inference datasets\n", "citation": "@article{ham2020kornli,\n title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},\n author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},\n journal={arXiv preprint arXiv:2004.03289},\n year={2020}\n}\n", "homepage": "https://github.com/kakaobrain/KorNLUDatasets", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "kor_nli", "config_name": "multi_nli", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 84729207, "num_examples": 392702, "dataset_name": "kor_nli"}}, "download_checksums": {"https://github.com/kakaobrain/KorNLUDatasets/archive/master.zip": {"num_bytes": 42113232, "checksum": "b1184d5e78a7d988400eabe3374b8a7e2abf182896f54e6e311c5173bb2c9bf5"}}, "download_size": 42113232, "post_processing_size": null, "dataset_size": 84729207, "size_in_bytes": 126842439}, "snli": {"description": " Korean Natural Language Inference datasets\n", "citation": "@article{ham2020kornli,\n title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},\n author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},\n journal={arXiv preprint arXiv:2004.03289},\n year={2020}\n}\n", "homepage": "https://github.com/kakaobrain/KorNLUDatasets", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "kor_nli", "config_name": "snli", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 80137097, "num_examples": 550152, "dataset_name": "kor_nli"}}, "download_checksums": {"https://github.com/kakaobrain/KorNLUDatasets/archive/master.zip": {"num_bytes": 42113232, "checksum": "b1184d5e78a7d988400eabe3374b8a7e2abf182896f54e6e311c5173bb2c9bf5"}}, "download_size": 42113232, "post_processing_size": null, "dataset_size": 80137097, "size_in_bytes": 122250329}, "xnli": {"description": " Korean Natural Language Inference datasets\n", "citation": "@article{ham2020kornli,\n title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},\n author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},\n journal={arXiv preprint arXiv:2004.03289},\n year={2020}\n}\n", "homepage": "https://github.com/kakaobrain/KorNLUDatasets", "license": "", "features": {"premise": {"dtype": "string", "id": null, "_type": "Value"}, "hypothesis": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 3, "names": ["entailment", "neutral", "contradiction"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "kor_nli", "config_name": "xnli", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"validation": {"name": "validation", "num_bytes": 518830, "num_examples": 2490, "dataset_name": "kor_nli"}, "test": {"name": "test", "num_bytes": 1047437, "num_examples": 5010, "dataset_name": "kor_nli"}}, "download_checksums": {"https://github.com/kakaobrain/KorNLUDatasets/archive/master.zip": {"num_bytes": 42113232, "checksum": "b1184d5e78a7d988400eabe3374b8a7e2abf182896f54e6e311c5173bb2c9bf5"}}, "download_size": 42113232, "post_processing_size": null, "dataset_size": 1566267, "size_in_bytes": 43679499}}
 
 
kor_nli.py DELETED
@@ -1,121 +0,0 @@
- """TODO(kor_nli): Add a description here."""
-
-
- import os
-
- import datasets
-
-
- # TODO(kor_nli): BibTeX citation
- _CITATION = """\
- @article{ham2020kornli,
-   title={KorNLI and KorSTS: New Benchmark Datasets for Korean Natural Language Understanding},
-   author={Ham, Jiyeon and Choe, Yo Joong and Park, Kyubyong and Choi, Ilji and Soh, Hyungjoon},
-   journal={arXiv preprint arXiv:2004.03289},
-   year={2020}
- }
- """
-
- # TODO(kor_nli):
- _DESCRIPTION = """ Korean Natural Language Inference datasets
- """
- _URL = "https://github.com/kakaobrain/KorNLUDatasets/archive/master.zip"
-
-
- class KorNLIConfig(datasets.BuilderConfig):
-     """BuilderConfig for KorNLI."""
-
-     def __init__(self, **kwargs):
-         """BuilderConfig for KorNLI.
-
-         Args:
-
-           **kwargs: keyword arguments forwarded to super.
-         """
-         # Version 1.1.0 remove empty document and summary strings.
-         super(KorNLIConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
-
-
- class KorNli(datasets.GeneratorBasedBuilder):
-     """TODO(kor_nli): Short description of my dataset."""
-
-     # TODO(kor_nli): Set up version.
-     VERSION = datasets.Version("1.0.0")
-     BUILDER_CONFIGS = [
-         KorNLIConfig(name="multi_nli", description="Korean multi NLI datasets"),
-         KorNLIConfig(name="snli", description="Korean SNLI dataset"),
-         KorNLIConfig(name="xnli", description="Korean XNLI dataset"),
-     ]
-
-     def _info(self):
-         # TODO(kor_nli): Specifies the datasets.DatasetInfo object
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # datasets.features.FeatureConnectors
-             features=datasets.Features(
-                 {
-                     # These are the features of your dataset like images, labels ...
-                     "premise": datasets.Value("string"),
-                     "hypothesis": datasets.Value("string"),
-                     "label": datasets.ClassLabel(names=["entailment", "neutral", "contradiction"]),
-                 }
-             ),
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://github.com/kakaobrain/KorNLUDatasets",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # TODO(kor_nli): Downloads the data and defines the splits
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         dl_dir = dl_manager.download_and_extract(_URL)
-         dl_dir = os.path.join(dl_dir, "KorNLUDatasets-master", "KorNLI")
-         if self.config.name == "multi_nli":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": os.path.join(dl_dir, "multinli.train.ko.tsv")},
-                 ),
-             ]
-         elif self.config.name == "snli":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": os.path.join(dl_dir, "snli_1.0_train.ko.tsv")},
-                 ),
-             ]
-         else:
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.VALIDATION,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": os.path.join(dl_dir, "xnli.dev.ko.tsv")},
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TEST,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": os.path.join(dl_dir, "xnli.test.ko.tsv")},
-                 ),
-             ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         # TODO(kor_nli): Yields (key, example) tuples from the dataset
-         with open(filepath, encoding="utf-8") as f:
-             next(f)  # skip headers
-             columns = ("premise", "hypothesis", "label")
-             for id_, row in enumerate(f):
-                 row = row.strip().split("\t")
-                 if len(row) != 3:
-                     continue
-                 row = dict(zip(columns, row))
-                 yield id_, row
multi_nli/kor_nli-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05d8b63f55e600ad0d0fb0768396d48de9b2ba46d4318e7ba50b0adc49ee7f1c
+ size 54693609
snli/kor_nli-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e81daab9808fb1596bdb8caeb5a7229201005cc07875c4baaf40cb2095c79bf6
+ size 22015954
xnli/kor_nli-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:efefa5758cc6b681bca03087fa7c8a7dd5148ed18ac4e1ebd1d3e90139a30278
+ size 351478
xnli/kor_nli-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0de7d9caebe615f05c0fb8065652ec0bac07904ad424f45650113e580a4e9d8
+ size 177841
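The added files are Git LFS pointers to the converted Parquet shards. A minimal sketch of reading them directly, assuming the LFS objects have been fetched locally (for example with `git lfs pull`) at the relative paths shown above:

```python
# Minimal sketch: inspect the converted Parquet shards with pandas
# (pyarrow or fastparquet must be installed). Paths match the files added above.
import pandas as pd

train = pd.read_parquet("multi_nli/kor_nli-train.parquet")
validation = pd.read_parquet("xnli/kor_nli-validation.parquet")
test = pd.read_parquet("xnli/kor_nli-test.parquet")

# Columns mirror the dataset features: premise, hypothesis, label (0/1/2).
print(train.columns.tolist(), len(train), len(validation), len(test))
```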