Felix committed on
Commit
c7a5837
1 Parent(s): 1fb0864

add config builder file

Files changed (1)
  1. superlim-2.py +272 -0
superlim-2.py ADDED
@@ -0,0 +1,272 @@
+ import datasets
+
+ _CITATION = """\
+ """
+
+ # You can copy an official description
+ _DESCRIPTION = """\
+ """
+
+ _HOMEPAGE = ""
+
+ _LICENSE = ""
+
+ _SUPERLIM_CITATION = """\
+ Yvonne Adesam, Aleksandrs Berdicevskis, Felix Morger (2020): SwedishGLUE – Towards a Swedish Test Set for Evaluating Natural Language Understanding Models
+ [1] Original Absabank:
+ Jacobo Rouces, Lars Borin, Nina Tahmasebi (2020): Creating an Annotated Corpus for Aspect-Based Sentiment Analysis in Swedish, in Proceedings of the 5th conference in Digital Humanities in the Nordic Countries, Riga, Latvia, October 21-23, 2020.
+ [2] DaLAJ:
+ Volodina, Elena, Yousuf Ali Mohammed, and Julia Klezl (2021). DaLAJ - a dataset for linguistic acceptability judgments for Swedish. In Proceedings of the 10th Workshop on Natural Language Processing for Computer Assisted Language Learning (NLP4CALL 2021). Linköping Electronic Conference Proceedings 177:3, s. 28-37. https://ep.liu.se/ecp/177/003/ecp2021177003.pdf
+ [3] Analogy:
+ Tosin Adewumi, Foteini Liwicki, Markus Liwicki (2020). Corpora compared: The case of the Swedish Gigaword & Wikipedia corpora. In: Proceedings of the 8th SLTC, Gothenburg. arXiv preprint arXiv:2011.03281
+ [4] Swedish Test Set for SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection:
+ Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, Nina Tahmasebi (2020): SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection, in Proceedings of the Fourteenth Workshop on Semantic Evaluation (SemEval2020), Barcelona, Spain (Online), December 12, 2020.
+ [5] Winogender:
+ Saga Hansson, Konstantinos Mavromatakis, Yvonne Adesam, Gerlof Bouma and Dana Dannélls (2021). The Swedish Winogender Dataset. In The 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021), Reykjavik.
+ [6] SuperSim:
+ Hengchen, Simon and Tahmasebi, Nina (2021). SuperSim: a test set for word similarity and relatedness in Swedish. In The 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021), Reykjavik. arXiv preprint arXiv:2104.05228
+ """
+
+ _SUPERLIM_DESCRIPTION = """\
+ SuperLim, a standardized suite for evaluation and analysis of Swedish natural language understanding systems.
+ """
+ _DaLAJ_DESCRIPTION = """\
+ Determine whether a sentence is correct Swedish or not.
+ """
+ _DaLAJ_CITATION = """\
+ [1] Original Absabank:
+ Jacobo Rouces, Lars Borin, Nina Tahmasebi (2020): Creating an Annotated Corpus for Aspect-Based Sentiment Analysis in Swedish, in Proceedings of the 5th conference in Digital Humanities in the Nordic Countries, Riga, Latvia, October 21-23, 2020.
+ [2] DaLAJ:
+ Volodina, Elena, Yousuf Ali Mohammed, and Julia Klezl (2021). DaLAJ - a dataset for linguistic acceptability judgments for Swedish. In Proceedings of the 10th Workshop on Natural Language Processing for Computer Assisted Language Learning (NLP4CALL 2021). Linköping Electronic Conference Proceedings 177:3, s. 28-37. https://ep.liu.se/ecp/177/003/ecp2021177003.pdf
+ """
+
+ _SweAna_DESCRIPTION = """\
+ The Swedish analogy test set follows the format of the original Google version. However, it is bigger and balanced across the two major categories,
+ having a total of 20,638 samples, made up of 10,381 semantic and 10,257 syntactic samples. It is also roughly balanced across the syntactic subsections.
+ There are 5 semantic subsections and 6 syntactic subsections. The dataset was constructed partly using the samples in the English version,
+ with the help of tools dedicated to Swedish translation, and it was proofread for corrections by two native speakers (with a percentage agreement of 98.93%)."""
+ _SweAna_CITATION = """\
+ [1] Original Absabank:
+ Jacobo Rouces, Lars Borin, Nina Tahmasebi (2020): Creating an Annotated Corpus for Aspect-Based Sentiment Analysis in Swedish, in Proceedings of the 5th conference in Digital Humanities in the Nordic Countries, Riga, Latvia, October 21-23, 2020.
+ """
+
+ _SweDiag_DESCRIPTION = """\
+ A finished preliminary translation of the SuperGLUE diagnostics. The data contains all of the original annotated sentence pairs from SuperGLUE together
+ with their Swedish translations."""
+ _SweDiag_CITATION = """\
+ """
+ _SweFaq_DESCRIPTION = """\
+ Frequently asked questions from the websites of Swedish public authorities, with the answers in randomized order"""
+ _SweFaq_CITATION = """\
+ """
+ _SweFracas_DESCRIPTION = """\
+ A textual inference/entailment problem set, derived from FraCaS. The original English FraCaS [1] was converted to HTML and edited by Bill MacCartney [2],
+ and then automatically translated to Swedish by Peter Ljunglöf and Magdalena Siverbo [3]. The current tabular form of the set was created by Aleksandrs Berdicevskis
+ by merging the Swedish and English versions and removing some of the problems. Finally, Lars Borin went through all the translations, correcting and Swedifying them manually.
+ As a result, many translations are rather liberal and diverge noticeably from the English original."""
+ _SweFracas_CITATION = """\
+ """
+ _SwePar_DESCRIPTION = """\
+ SweParaphrase is a subset of the automatically translated Swedish Semantic Textual Similarity dataset (Isbister and Sahlgren, 2020).
+ It consists of 165 manually corrected Swedish sentence pairs paired with the original English sentences and their similarity scores
+ ranging between 0 (no meaning overlap) and 5 (meaning equivalence). These scores were taken from the English data, where they had been assigned
+ via crowdsourcing through Mechanical Turk. Each sentence pair belongs to one genre (e.g. news, forums or captions).
+ The task is to determine how similar two sentences are."""
+ _SwePar_CITATION = """\
+ """
+ _SweSat_DESCRIPTION = """\
+ The dataset provides a gold standard for Swedish word synonymy/definition. The test items are collected from the Swedish Scholastic
+ Aptitude Test (högskoleprovet), currently spanning the years 2006–2021 and comprising 822 vocabulary test items. The task for the tested system
+ is to determine which synonym or definition of five alternatives is correct for each test item.
+ """
+ _SweSat_CITATION = """\
+ """
+
+ _SweSim_DESCRIPTION = """\
+ SuperSim is a large-scale similarity and relatedness test set for Swedish built with expert human judgments. The test set is composed of 1360 word-pairs independently judged for both relatedness and similarity by five annotators."""
+
+ _SweWgr_DESCRIPTION = """\
+ The SweWinogender test set is a diagnostic dataset for measuring gender bias in coreference resolution. It is modelled after the English Winogender benchmark,
+ and is released with reference statistics on the distribution of men and women between occupations and the association between gender and occupation in modern corpus material."""
+
+ _SweWsc_DESCRIPTION = """\
+ SweWinograd is a pronoun resolution test set, containing constructed items in the style of Winograd schemas. The interpretation of the target pronouns is determined by (common sense)
+ reasoning and knowledge, and not by syntactic constraints, lexical distributional information or discourse structuring patterns.
+ The dataset contains 90 multiple-choice test items with multiple correct answers."""
+
+ _SweWic_DESCRIPTION = """\
+ The Swedish Word-in-Context dataset provides a benchmark for evaluating distributional models of word meaning, in particular context-sensitive/dynamic models. Constructed following the principles of the (English)
+ Word-in-Context dataset, SweWiC consists of 1000 sentence pairs, where each sentence in a pair contains an occurrence of a potentially ambiguous focus word specific to that pair. The question posed to the tested
+ system is whether these two occurrences represent instances of the same word sense. There are 500 same-sense pairs and 500 different-sense pairs."""
+
+ # TODO: Add link to the official dataset URLs here
+ # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
+ _URL = "https://huggingface.co/datasets/sbx/superlim-2/tree/main/data/"
+ _TASKS = {
+     "absabank": "ABSAbank-Imm",
+     "dalaj": "DaLAJ",
+     "swesim_relatedness": "SuperSim_relatedness",
+     "swesim_similarity": "SuperSim_similarity",
+     "sweana": "SweAnalogy",
+     "swefaq": "SweFAQ",
+     "swepar": "SweParaphrase",
+     "swesat": "SweSAT-synonyms",
+     "swewic": "SweWIC"
+ }
+
+
+ class SuperLimConfig(datasets.BuilderConfig):
+     """BuilderConfig for SuperLim."""
+
+     def __init__(self, features, data_url, citation, url, label_classes=("False", "True"), **kwargs):
+         """BuilderConfig for SuperLim.
+
+         Args:
+           features: `list[string]`, list of the features that will appear in the
+             feature dict. Should not include "label".
+           data_url: `string`, url to download the zip file from.
+           citation: `string`, citation for the data set.
+           url: `string`, url for information about the data set.
+           label_classes: `list[string]`, the list of classes for the label if the
+             label is present as a string. Non-string labels will be cast to either
+             'False' or 'True'.
+           **kwargs: keyword arguments forwarded to super.
+         """
+         # Version history:
+         # 1.0.2: Fixed non-determinism in ReCoRD.
+         # 1.0.1: Change from the pre-release trial version of SuperLim (v1.9) to
+         #        the full release (v2.0).
+         # 1.0.0: S3 (new shuffling, sharding and slicing mechanism).
+         # 0.0.2: Initial version.
+         super(SuperLimConfig, self).__init__(version=datasets.Version("2.0.0"), **kwargs)
+         self.features = features
+         self.label_classes = label_classes
+         self.data_url = data_url
+         self.citation = citation
+         self.url = url
+
+ class SuperLim(datasets.GeneratorBasedBuilder):
+     """The SuperLim benchmark."""
+
+     VERSION = datasets.Version("2.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="absabank", version=VERSION, description=_DaLAJ_DESCRIPTION),
+         datasets.BuilderConfig(name="dalaj", version=VERSION, description=_DaLAJ_DESCRIPTION),
+         datasets.BuilderConfig(name="swesim_relatedness", version=VERSION, description=_SweSim_DESCRIPTION),
+         datasets.BuilderConfig(name="swesim_similarity", version=VERSION, description=_SweSim_DESCRIPTION),
+         datasets.BuilderConfig(name="sweana", version=VERSION, description=_SweAna_DESCRIPTION),
+         datasets.BuilderConfig(name="swefaq", version=VERSION, description=_SweFaq_DESCRIPTION),
+         datasets.BuilderConfig(name="swepar", version=VERSION, description=_SwePar_DESCRIPTION),
+         datasets.BuilderConfig(name="swesat", version=VERSION, description=_SweSat_DESCRIPTION),
+         datasets.BuilderConfig(name="swewic", version=VERSION, description=_SweWic_DESCRIPTION)
+     ]
+
+     def _info(self):
+         # TODO: This method specifies the datasets.DatasetInfo object which contains information and typings for the dataset
+         if self.config.name == "dalaj":  # This is the name of the configuration selected in BUILDER_CONFIGS above
+             features = datasets.Features(
+                 {
+                     "original_sentence": datasets.Value("string"),
+                     "corrected_sentence": datasets.Value("string"),
+                     "error_indices": datasets.Value("string"),
+                     "corrected_indices": datasets.Value("string"),
+                     "error_corr_pair": datasets.Value("string"),
+                     "error_label": datasets.Value("string"),
+                     "l1": datasets.Value("string"),
+                     "approximate_level": datasets.Value("string"),
+                     # These are the features of your dataset like images, labels ...
+                 }
+             )
+         elif self.config.name == 'absabank':
+             features = datasets.Features(
+                 {
+                     "text": datasets.Value("string"),
+                     "label": datasets.Value(dtype='float32')
+                 }
+             )
+         elif self.config.name == "sweana":
+             features = datasets.Features(
+                 {
+                     "a": datasets.Value("string"),
+                     "b": datasets.Value("string"),
+                     "c": datasets.Value("string"),
+                     "d": datasets.Value("string"),
+                     "relation": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "swefaq":
+             features = datasets.Features(
+                 {
+                     "question": datasets.Value("string"),
+                     "candidate_answer": datasets.Value("string"),
+                     "correct_answer": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "swepar":
+             features = datasets.Features(
+                 {
+                     "sentence_1": datasets.Value("string"),
+                     "sentence_2": datasets.Value("string"),
+                     "similarity_score": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "swesat":
+             features = datasets.Features(
+                 {
+                     "target_item": datasets.Value("string"),
+                     "answer_1": datasets.Value("string"),
+                     "answer_2": datasets.Value("string"),
+                     "answer_3": datasets.Value("string"),
+                     "answer_4": datasets.Value("string"),
+                     "answer_5": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "swesim_relatedness":
+             features = datasets.Features(
+                 {
+                     "word_1": datasets.Value("string"),
+                     "word_2": datasets.Value("string"),
+                     "relatedness": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "swesim_similarity":
+             features = datasets.Features(
+                 {
+                     "word_1": datasets.Value("string"),
+                     "word_2": datasets.Value("string"),
+                     "similarity": datasets.Value("string"),
+                 }
+             )
+         elif self.config.name == "swewic":
+             features = datasets.Features(
+                 {
+                     "sentence_1": datasets.Value("string"),
+                     "word_1": datasets.Value("string"),
+                     "sentence_2": datasets.Value("string"),
+                     "word_2": datasets.Value("string"),
+                     "same_sense": datasets.Value("string"),
+                     "start_1": datasets.Value("string"),
+                     "start_2": datasets.Value("string"),
+                     "end_1": datasets.Value("string"),
+                     "end_2": datasets.Value("string"),
+                 }
+             )
+         else:
+             raise ValueError(f"Subset {self.config.name} does not exist.")
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=features,  # defined above because they differ between configurations
+             # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
+             # specify them. They'll be used if as_supervised=True in builder.as_dataset.
+             # supervised_keys=("sentence", "label"),
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
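
The script stops at `_info`: a `datasets.GeneratorBasedBuilder` also needs `_split_generators` and `_generate_examples` before `load_dataset` can return examples. Below is a minimal sketch of those two methods, written against the public `datasets` API but under assumptions that are not in this commit: a hypothetical layout of tab-separated train/dev/test files under the repo's data/ folder, reached through a "resolve/main" URL (the `_URL` above uses "tree/main", which is the HTML listing rather than the raw files). Treat it as a shape for the missing pieces, not the actual loader.

import csv

import datasets

# Hypothetical raw-file base URL and a stand-in for the full _TASKS mapping above.
_DATA_URL = "https://huggingface.co/datasets/sbx/superlim-2/resolve/main/data/"
_TASKS = {"dalaj": "DaLAJ"}


class SuperLimSketch(datasets.GeneratorBasedBuilder):
    """Illustration only: these two methods would be added to the SuperLim class above."""

    def _split_generators(self, dl_manager):
        # Assumed layout: data/<task dir>/{train,dev,test}.tsv, with the dir taken from _TASKS.
        task_dir = _TASKS[self.config.name]
        urls = {split: f"{_DATA_URL}{task_dir}/{split}.tsv" for split in ("train", "dev", "test")}
        paths = dl_manager.download(urls)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": paths["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": paths["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": paths["test"]}),
        ]

    def _generate_examples(self, filepath):
        # One example per TSV row; the column names must match the Features declared in _info.
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.DictReader(f, delimiter="\t")):
                yield idx, dict(row)

Once the real methods are in place, a configuration can be loaded by name, e.g. datasets.load_dataset("sbx/superlim-2", "dalaj").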