evalitahf committed · verified
Commit 0cb5e79 · Parent(s): bdb8000

feat: initial commit

Files changed (3)
  1. README.md +106 -0
  2. multichoice_v1.jsonl +0 -0
  3. multichoice_v2.jsonl +0 -0
README.md ADDED
---
language:
- it
language_details: it-IT
license: cc-by-nc-sa-4.0
task_categories:
- question-answering
configs:
- config_name: default
  data_files:
  - split: test-1
    path: "multichoice_v1.jsonl"
  - split: test-2
    path: "multichoice_v2.jsonl"
size_categories:
- n<1K
---

# QA4FAQ @ EVALITA 2016

Original dataset information is available [here](http://qa4faq.github.io/).

## Data format

The data has been converted for use as a multiple-choice question answering task.
There are two splits, test-1 and test-2, each containing the same data processed in slightly different ways; both files can be read directly, as shown below.
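
Since each split is a plain JSONL file, it can be loaded without any special tooling. A minimal sketch in Python (the paths assume the files sit in the current directory):

```python
import json

def load_split(path):
    """Read a JSONL split file into a list of dicts, one per question."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

test_1 = load_split("multichoice_v1.jsonl")
test_2 = load_split("multichoice_v2.jsonl")
print(len(test_1), "items in test-1;", len(test_2), "items in test-2")
```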

### test-1

The data is in JSONL format, where each line is a JSON object with the following fields:
- `id`: a unique identifier for the question
- `question`: the question text
- `A`, `B`, `C`, `D`: the candidate answers to the question
- `correct_answer`: the letter of the correct answer ('A', 'B', 'C', or 'D')

Wrong answers are randomly drawn from the other (question, answer) pairs in the dataset. An accuracy computation over this split is sketched below.
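
For example (a sketch; `predict` is a hypothetical callable, not part of this dataset, that maps a parsed item to one of 'A', 'B', 'C', 'D'):

```python
import json

def accuracy(path, predict):
    """Fraction of items for which predict(item) equals correct_answer."""
    correct = total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            item = json.loads(line)
            total += 1
            if predict(item) == item["correct_answer"]:
                correct += 1
    return correct / total if total else 0.0

# Trivial baseline that always answers 'A' (expect roughly 25% if balanced).
print(accuracy("multichoice_v1.jsonl", lambda item: "A"))
```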

### test-2

The data is in JSONL format, where each line is a JSON object with the following fields:
- `id`: a unique identifier for the question
- `question`: the question text
- `A`, `B`, `C`, `D`: the candidate (question, answer) pairs
- `correct_answer`: the letter of the correct (question, answer) pair ('A', 'B', 'C', or 'D')

Wrong (question, answer) pairs are created by randomly choosing answers from the dataset, as sketched below.
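
One plausible reading of that construction, pairing each question with answers sampled from other items, is sketched here. The original conversion's sampling details (seed, de-duplication) are not specified, so treat this as illustrative only:

```python
import random

LETTERS = ["A", "B", "C", "D"]

def make_test2_item(pairs, idx, rng, k=3):
    """Build one test-2-style item from gold (question, answer) pairs.

    The gold pair keeps its own answer; k distractor pairs reuse the same
    question with answers sampled from other items (an assumed reading of
    the construction described above).
    """
    question, answer = pairs[idx]
    other_answers = [a for i, (_, a) in enumerate(pairs) if i != idx]
    options = [(question, answer)]
    options += [(question, a) for a in rng.sample(other_answers, k)]
    rng.shuffle(options)
    item = {letter: opt for letter, opt in zip(LETTERS, options)}
    item["correct_answer"] = LETTERS[options.index((question, answer))]
    return item

# Toy usage with made-up pairs.
pairs = [(f"Q{i}?", f"A{i}") for i in range(5)]
print(make_test2_item(pairs, 0, random.Random(0)))
```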

## Publications

```
@inproceedings{agirre-etal-2015-semeval,
    title = "{S}em{E}val-2015 Task 2: Semantic Textual Similarity, {E}nglish, {S}panish and Pilot on Interpretability",
    author = "Agirre, Eneko and
      Banea, Carmen and
      Cardie, Claire and
      Cer, Daniel and
      Diab, Mona and
      Gonzalez-Agirre, Aitor and
      Guo, Weiwei and
      Lopez-Gazpio, I{\~n}igo and
      Maritxalar, Montse and
      Mihalcea, Rada and
      Rigau, German and
      Uria, Larraitz and
      Wiebe, Janyce",
    editor = "Nakov, Preslav and
      Zesch, Torsten and
      Cer, Daniel and
      Jurgens, David",
    booktitle = "Proceedings of the 9th International Workshop on Semantic Evaluation ({S}em{E}val 2015)",
    month = jun,
    year = "2015",
    address = "Denver, Colorado",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S15-2045",
    doi = "10.18653/v1/S15-2045",
    pages = "252--263",
}
```

```
@inproceedings{nakov-etal-2015-semeval,
    title = "{S}em{E}val-2015 Task 3: Answer Selection in Community Question Answering",
    author = "Nakov, Preslav and
      M{\`a}rquez, Llu{\'\i}s and
      Magdy, Walid and
      Moschitti, Alessandro and
      Glass, Jim and
      Randeree, Bilal",
    editor = "Nakov, Preslav and
      Zesch, Torsten and
      Cer, Daniel and
      Jurgens, David",
    booktitle = "Proceedings of the 9th International Workshop on Semantic Evaluation ({S}em{E}val 2015)",
    month = jun,
    year = "2015",
    address = "Denver, Colorado",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S15-2047",
    doi = "10.18653/v1/S15-2047",
    pages = "269--281",
}
```
multichoice_v1.jsonl ADDED
 
multichoice_v2.jsonl ADDED