---
language:
  - it
language_details: it-IT
license: cc-by-nc-sa-4.0
task_categories:
  - question-answering
task_ids:
  - text-classification
configs:
  - config_name: default
    data_files:
      - split: test_1
        path: multichoice_v1_test.jsonl
      - split: dev_1
        path: multichoice_v1_dev.jsonl
      - split: test_2
        path: multichoice_v2_test.jsonl
      - split: dev_2
        path: multichoice_v2_dev.jsonl
size_categories:
  - n<1K
---

# QA4FAQ @ EVALITA 2016

Original dataset information available here

## Data format

The data has been converted into a multiple-choice question answering task. There are two versions, test-1 and test-2 (each with a test and a dev partition, as listed in the `configs` section above), containing the same data processed in slightly different ways.
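A minimal loading sketch with the Hugging Face `datasets` library, assuming the four JSONL files listed in the header sit in the working directory (the file names come from the YAML configuration; everything else is illustrative):

```python
from datasets import load_dataset

# Load the four partitions declared in the YAML header of this card.
# Adjust the paths if the files live elsewhere.
data = load_dataset(
    "json",
    data_files={
        "test_1": "multichoice_v1_test.jsonl",
        "dev_1": "multichoice_v1_dev.jsonl",
        "test_2": "multichoice_v2_test.jsonl",
        "dev_2": "multichoice_v2_dev.jsonl",
    },
)

print(data)               # four splits: test_1, dev_1, test_2, dev_2
print(data["test_1"][0])  # one multiple-choice record
```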

### test-1

The data is in JSONL format, where each line is a JSON object with the following fields:

- `id`: a unique identifier for the question
- `question`: the question
- `A`, `B`, `C`, `D`: the candidate answers to the question
- `correct_answer`: the letter of the correct answer ('A', 'B', 'C', or 'D')

Wrong answers are randomly drawn from the other (question, answer) pairs in the dataset.
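As an illustration, here is a small sketch that reads the v1 test file and renders a record as a four-option prompt. The field names are the ones listed above, while the file path, helper names, and prompt layout are only assumptions:

```python
import json

def iter_records(path):
    """Yield one JSON record per line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

def format_prompt(record):
    """Render a test-1 record as a four-option multiple-choice prompt."""
    options = "\n".join(f"{letter}. {record[letter]}" for letter in "ABCD")
    return f"{record['question']}\n{options}"

first = next(iter_records("multichoice_v1_test.jsonl"))  # assumed local path
print(format_prompt(first))
print("gold:", first["correct_answer"])  # one of 'A', 'B', 'C', 'D'
```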

### test-2

The data is in JSONL format, where each line is a JSON object with the following fields:

- `id`: a unique identifier for the question
- `question`: the question
- `A`, `B`, `C`, `D`: the candidate (question, answer) pairs
- `correct_answer`: the letter of the correct (question, answer) pair ('A', 'B', 'C', or 'D')

Wrong (question, answer) pairs are created by randomly choosing answers from elsewhere in the dataset.
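Since both versions expose the same `correct_answer` field, evaluation reduces to comparing a predicted letter with the gold one. A minimal sketch of an accuracy scorer follows; the file path and the trivial baseline are placeholders:

```python
import json

def accuracy(path, predict):
    """Score a predictor that maps a record to one of 'A', 'B', 'C', 'D'."""
    total = correct = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            total += 1
            correct += predict(record) == record["correct_answer"]
    return correct / total if total else 0.0

# Trivial baseline that always answers 'A'; replace with a real model.
print(accuracy("multichoice_v2_test.jsonl", lambda record: "A"))
```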

## Publications

```bibtex
@inproceedings{agirre-etal-2015-semeval,
    title = "{S}em{E}val-2015 Task 2: Semantic Textual Similarity, {E}nglish, {S}panish and Pilot on Interpretability",
    author = "Agirre, Eneko  and
      Banea, Carmen  and
      Cardie, Claire  and
      Cer, Daniel  and
      Diab, Mona  and
      Gonzalez-Agirre, Aitor  and
      Guo, Weiwei  and
      Lopez-Gazpio, I{\~n}igo  and
      Maritxalar, Montse  and
      Mihalcea, Rada  and
      Rigau, German  and
      Uria, Larraitz  and
      Wiebe, Janyce",
    editor = "Nakov, Preslav  and
      Zesch, Torsten  and
      Cer, Daniel  and
      Jurgens, David",
    booktitle = "Proceedings of the 9th International Workshop on Semantic Evaluation ({S}em{E}val 2015)",
    month = jun,
    year = "2015",
    address = "Denver, Colorado",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S15-2045",
    doi = "10.18653/v1/S15-2045",
    pages = "252--263",
}
@inproceedings{nakov-etal-2015-semeval,
    title = "{S}em{E}val-2015 Task 3: Answer Selection in Community Question Answering",
    author = "Nakov, Preslav  and
      M{\`a}rquez, Llu{\'\i}s  and
      Magdy, Walid  and
      Moschitti, Alessandro  and
      Glass, Jim  and
      Randeree, Bilal",
    editor = "Nakov, Preslav  and
      Zesch, Torsten  and
      Cer, Daniel  and
      Jurgens, David",
    booktitle = "Proceedings of the 9th International Workshop on Semantic Evaluation ({S}em{E}val 2015)",
    month = jun,
    year = "2015",
    address = "Denver, Colorado",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/S15-2047",
    doi = "10.18653/v1/S15-2047",
    pages = "269--281",
}
```