id (string, 2-115 chars) | private (bool) | tags (sequence) | description (string, 0-5.93k chars, nullable) | downloads (int64, 0-1.14M) | likes (int64, 0-1.79k) |
---|---|---|---|---|---|
acronym_identification | false | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"acronym-identification",
"arxiv:2010.14678"
] | Acronym identification training and development sets for the acronym identification task at SDU@AAAI-21. | 5,973 | 11 |
ade_corpus_v2 | false | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:coreference-resolution",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:unknown"
] | ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data.
This dataset supports classification of whether a sentence is ADE-related (True) or not (False) and relation extraction between adverse drug events and drugs.
DRUG-AE.rel provides relations between drugs and adverse effects.
DRUG-DOSE.rel provides relations between drugs and dosages.
ADE-NEG.txt provides all sentences in the ADE corpus that DO NOT contain any drug-related adverse effects. | 3,893 | 14 |
adversarial_qa | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2002.00293",
"arxiv:1606.05250"
] | AdversarialQA is a Reading Comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles using an adversarial model-in-the-loop.
We use three different models, BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019), in the annotation loop and construct three datasets, D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training, 1,000 validation, and 1,000 test examples.
The adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging. | 38,194 | 22 |
aeslc | false | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"aspect-based-summarization",
"conversations-summarization",
"multi-document-summarization",
"email-headline-generation",
"arxiv:1906.03497"
] | A collection of email messages of employees in the Enron Corporation.
There are two features:
- email_body: email body text.
- subject_line: email subject text. | 1,259 | 3 |
afrikaans_ner_corpus | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:af",
"license:other"
] | Named entity annotated data from the NCHLT Text Resource Development: Phase II Project, annotated with PERSON, LOCATION, ORGANISATION and MISCELLANEOUS tags. | 356 | 3 |
ag_news | false | [
"task_categories:text-classification",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown"
] | AG is a collection of more than 1 million news articles. News articles have been
gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of
activity. ComeToMyHead is an academic news search engine which has been running
since July 2004. The dataset is provided by the academic community for research
purposes in data mining (clustering, classification, etc), information retrieval
(ranking, search, etc), xml, data compression, data streaming, and any other
non-commercial activity. For more information, please refer to the link
http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .
The AG's news topic classification dataset is constructed by Xiang Zhang
([email protected]) from the dataset above. It is used as a text
classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann
LeCun. Character-level Convolutional Networks for Text Classification. Advances
in Neural Information Processing Systems 28 (NIPS 2015). | 28,764 | 49 |
ai2_arc | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:multiple-choice-qa",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0"
] | A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in
advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains
only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also
including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community. | 28,200 | 6 |
air_dialogue | false | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0"
] | AirDialogue is a large dataset containing 402,038 goal-oriented conversations. To collect this dataset, we created a context generator that provides travel and flight restrictions. Human annotators were then asked to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. | 466 | 1 |
ajgt_twitter_ar | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"license:unknown"
] | The Arabic Jordanian General Tweets (AJGT) Corpus consists of 1,800 tweets written in Modern Standard Arabic (MSA) or Jordanian dialect and annotated as positive or negative. | 358 | 2 |
allegro_reviews | false | [
"task_categories:text-classification",
"task_ids:sentiment-scoring",
"task_ids:text-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-sa-4.0"
] | Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted
from Allegro.pl - a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale
from one (negative review) to five (positive review).
We recommend using the provided train/dev/test split. The ratings for the test set reviews are kept hidden.
You can evaluate your model using the online evaluation tool available on klejbenchmark.com. | 279 | 0 |
allocine | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:fr",
"license:mit"
] | Allocine Dataset: A Large-Scale French Movie Reviews Dataset.
This is a dataset for binary sentiment classification, made of user reviews scraped from Allocine.fr.
It contains 100k positive and 100k negative reviews divided into 3 balanced splits: train (160k reviews), val (20k) and test (20k). | 542 | 5 |
alt | false | [
"task_categories:translation",
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:bn",
"language:en",
"language:fil",
"language:hi",
"language:id",
"language:ja",
"language:km",
"language:lo",
"language:ms",
"language:my",
"language:th",
"language:vi",
"language:zh",
"license:cc-by-4.0"
] | The ALT project aims to advance the state of the art in Asian natural language processing (NLP) through open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). It was then developed under ASEAN IVO as described on the project web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, which were then translated into the other languages. ALT now has 13 languages: Bengali, English, Filipino, Hindi, Bahasa Indonesia, Japanese, Khmer, Lao, Malay, Myanmar (Burmese), Thai, Vietnamese, and Chinese (Simplified). | 1,266 | 5 |
amazon_polarity | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1509.01626"
] | The Amazon reviews dataset consists of reviews from Amazon.
The data span a period of 18 years, including ~35 million reviews up to March 2013.
Reviews include product and user information, ratings, and a plaintext review. | 20,483 | 23 |
amazon_reviews_multi | false | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:topic-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:ja",
"language:zh",
"license:other",
"arxiv:2010.02573"
] | We provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.). The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.
For each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long.
Note that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from amazon.de are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. (2017) to determine the language of the review text and we removed reviews that were not written in the expected language. | 11,555 | 41 |
amazon_us_reviews | false | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"task_ids:topic-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:other"
] | Amazon Customer Reviews (a.k.a. Product Reviews) is one of Amazon's iconic products. Over a period of more than two decades since the first review in 1995, millions of Amazon customers have contributed over a hundred million reviews to express opinions and describe their experiences regarding products on the Amazon.com website. This makes Amazon Customer Reviews a rich source of information for academic researchers in the fields of Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning (ML), amongst others. Accordingly, we are releasing this data to further research in multiple disciplines related to understanding customer product experiences. Specifically, this dataset was constructed to represent a sample of customer evaluations and opinions, variation in the perception of a product across geographical regions, and promotional intent or bias in reviews.
More than 130 million customer reviews are available to researchers as part of this release. The data is available in TSV files in the amazon-reviews-pds S3 bucket in AWS US East Region. Each line in the data files corresponds to an individual review (tab delimited, with no quote and escape characters).
Each dataset contains the following columns (a reading sketch follows this entry):
- marketplace: 2 letter country code of the marketplace where the review was written.
- customer_id: Random identifier that can be used to aggregate reviews written by a single author.
- review_id: The unique ID of the review.
- product_id: The unique Product ID the review pertains to. In the multilingual dataset the reviews for the same product in different countries can be grouped by the same product_id.
- product_parent: Random identifier that can be used to aggregate reviews for the same product.
- product_title: Title of the product.
- product_category: Broad product category that can be used to group reviews (also used to group the dataset into coherent parts).
- star_rating: The 1-5 star rating of the review.
- helpful_votes: Number of helpful votes.
- total_votes: Number of total votes the review received.
- vine: Review was written as part of the Vine program.
- verified_purchase: The review is on a verified purchase.
- review_headline: The title of the review.
- review_body: The review text.
- review_date: The date the review was written. | 15,051 | 16 |
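A minimal sketch of reading one of the TSV files described above with pandas, assuming a locally downloaded file; the file name used here is purely illustrative and not part of the dataset description:
```python
import csv

import pandas as pd

# Hypothetical local copy of one per-category TSV file; the name is an
# illustrative assumption only.
path = "amazon_reviews_us_Books_v1_02.tsv.gz"

# Per the description the files are tab-delimited with no quote or escape
# characters, so quoting is disabled entirely.
reviews = pd.read_csv(
    path,
    sep="\t",
    quoting=csv.QUOTE_NONE,
    on_bad_lines="skip",
)

# Columns listed above, e.g. product_title, star_rating, helpful_votes.
print(reviews[["product_title", "star_rating", "helpful_votes"]].head())
```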
ambig_qa | false | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|natural_questions",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:2004.10645"
] | AmbigNQ is a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus.
We provide two distributions of our new dataset AmbigNQ: a full version with all annotation metadata and a light version with only inputs and outputs. | 1,113 | 1 |
americas_nli | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|xnli",
"language:ay",
"language:bzd",
"language:cni",
"language:gn",
"language:hch",
"language:nah",
"language:oto",
"language:qu",
"language:shp",
"language:tar",
"license:unknown",
"arxiv:2104.08726"
] | AmericasNLI is an extension of XNLI (Conneau et al., 2018) – a natural language inference (NLI) dataset covering 15 high-resource languages – to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. As with MNLI, the goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B) and is a classification task (given two sentences, predict one of three labels). | 2,774 | 0 |
ami | false | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0"
] | The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals
synchronized to a common timeline. These include close-talking and far-field microphones, individual and
room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,
the participants also have unsynchronized pens available to them that record what is written. The meetings
were recorded in English using three different rooms with different acoustic properties, and include mostly
non-native speakers. | 799 | 8 |
amttl | false | [
"task_categories:token-classification",
"task_ids:parsing",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:mit"
] | Chinese word segmentation (CWS) models trained on open-source corpora face a dramatic performance drop
when dealing with domain text, especially for a domain with lots of special terms and diverse
writing styles, such as the biomedical domain. However, building domain-specific CWS requires
extremely high annotation cost. In this paper, we propose an approach by exploiting domain-invariant
knowledge from high-resource to low-resource domains. Extensive experiments show that our model
achieves consistently higher accuracy than the single-task CWS and other transfer learning
baselines, especially when there is a large disparity between source and target domains.
This dataset is the accompanied medical Chinese word segmentation (CWS) dataset.
The tags are in BIES scheme.
For more details see https://www.aclweb.org/anthology/C18-1307/ | 276 | 0 |
anli | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"source_datasets:extended|hotpot_qa",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:1910.14599"
] | The Adversarial Natural Language Inference (ANLI) dataset is a new large-scale NLI benchmark.
The dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure.
ANLI is much more difficult than its predecessors including SNLI and MNLI.
It contains three rounds. Each round has train/dev/test splits. | 51,185 | 15 |
app_reviews | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:sentiment-scoring",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown"
] | It is a large dataset of Android applications belonging to 23 different app categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset contains about 395 applications from the F-Droid repository, including around 600 versions and 280,000 user reviews (extracted with specific text mining approaches). | 5,794 | 6 |
aqua_rat | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:1705.04146"
] | A large-scale dataset consisting of approximately 100,000 algebraic word problems.
The solution to each question is explained step-by-step using natural language.
This data is used to train a program generation model that learns to generate the explanation,
while generating the program that solves the question. | 1,952 | 2 |
aquamuse | false | [
"task_categories:other",
"task_categories:question-answering",
"task_categories:text2text-generation",
"task_ids:abstractive-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|natural_questions",
"source_datasets:extended|other-Common-Crawl",
"source_datasets:original",
"language:en",
"license:unknown",
"query-based-multi-document-summarization",
"arxiv:2010.12694"
] | AQuaMuSe is a novel scalable approach to automatically mine dual query-based multi-document summarization datasets for extractive and abstractive summaries using a question answering dataset (Google Natural Questions) and large document corpora (Common Crawl). | 256 | 4 |
ar_cov19 | false | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ar",
"data-mining",
"arxiv:2004.05861"
] | ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27 January to 30 April 2020. ArCOV-19 is designed to enable research across several domains, including natural language processing, information retrieval, and social computing, among others. | 273 | 1 |
ar_res_reviews | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"license:unknown"
] | A dataset of 8,364 restaurant reviews in Arabic scraped from qaym.com for sentiment analysis. | 291 | 3 |
ar_sarcasm | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-semeval_2017",
"source_datasets:extended|other-astd",
"language:ar",
"license:mit",
"sarcasm-detection"
] | ArSarcasm is a new Arabic sarcasm detection dataset.
The dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD)
and adds sarcasm and dialect labels to them. The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic. | 323 | 4 |
arabic_billion_words | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ar",
"license:unknown",
"arxiv:1611.04033"
] | The Abu El-Khair Corpus is an Arabic text corpus that includes more than five million newspaper articles.
It contains over a billion and a half words in total, out of which, there are about three million unique words.
The corpus is provided in two encodings: UTF-8 and Windows CP-1256.
It is also marked up in two markup languages: SGML and XML. | 1,568 | 6 |
arabic_pos_dialect | false | [
"task_categories:token-classification",
"task_ids:part-of-speech",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:extended",
"language:ar",
"license:apache-2.0",
"arxiv:1708.05891"
] | The Dialectal Arabic Datasets cover four dialects of Arabic: Egyptian (EGY), Levantine (LEV), Gulf (GLF), and Maghrebi (MGR). Each dataset consists of a set of 350 manually segmented and POS-tagged tweets. | 683 | 2 |
arabic_speech_corpus | false | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"license:cc-by-4.0"
] | This speech corpus was developed as part of PhD work carried out by Nawar Halabi at the University of Southampton.
The corpus was recorded in south Levantine Arabic
(Damascian accent) using a professional studio. Synthesized speech as an output using this corpus has produced a high quality, natural voice.
Note that in order to limit the required storage for preparing this dataset, the audio
is stored in the .flac format and is not converted to a float32 array. To convert the audio
file to a float32 array, please make use of the `.map()` function as follows:
```python
import soundfile as sf

def map_to_array(batch):
    # Decode the audio file referenced by "file" into an array of samples.
    speech_array, _ = sf.read(batch["file"])
    batch["speech"] = speech_array
    return batch

dataset = dataset.map(map_to_array, remove_columns=["file"])
``` | 474 | 12 |
arcd | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"license:mit"
] | Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles. | 329 | 1 |
arsentd_lev | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:topic-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:apc",
"language:ajp",
"license:other",
"arxiv:1906.01830"
] | The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria. | 275 | 3 |
art | false | [
"task_categories:multiple-choice",
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown",
"abductive-natural-language-inference",
"arxiv:1908.05739"
] | The Abductive Natural Language Inference Dataset from AI2. | 1,128 | 3 |
arxiv_dataset | false | [
"task_categories:translation",
"task_categories:summarization",
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:explanation-generation",
"task_ids:fact-checking-retrieval",
"task_ids:text-simplification",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"arxiv:1905.00075"
] | A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces. | 486 | 19 |
ascent_kb | false | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"knowledge-base",
"arxiv:2011.00905"
] | This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline (https://ascent.mpi-inf.mpg.de/). | 409 | 2 |
aslg_pc12 | false | [
"task_categories:translation",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ase",
"language:en",
"license:cc-by-nc-4.0"
] | A large synthetic collection of parallel English and ASL-Gloss texts.
There are two string features: text, and gloss. | 380 | 1 |
asnq | false | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:extended|natural_questions",
"language:en",
"license:cc-by-nc-sa-3.0",
"arxiv:1911.04118"
] | ASNQ is a dataset for answer sentence selection derived from
Google's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).
Each example contains a question, candidate sentence, label indicating whether or not
the sentence answers the question, and two additional features --
sentence_in_long_answer and short_answer_in_sentence indicating whether or not the
candidate sentence is contained in the long_answer and if the short_answer is in the candidate sentence.
For more details please see
https://arxiv.org/pdf/1911.04118.pdf
and
https://research.google/pubs/pub47761/ | 576 | 1 |
asset | false | [
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_ids:text-simplification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"source_datasets:extended|other-turkcorpus",
"language:en",
"license:cc-by-sa-4.0",
"simplification-evaluation"
] | ASSET is a dataset for evaluating Sentence Simplification systems with multiple rewriting transformations,
as described in "ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations".
The corpus is composed of 2000 validation and 359 test original sentences that were each simplified 10 times by different annotators.
The corpus also contains human judgments of meaning preservation, fluency and simplicity for the outputs of several automatic text simplification systems. | 2,465 | 6 |
assin | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pt",
"license:unknown"
] | The ASSIN (Avaliação de Similaridade Semântica e INferência textual) corpus is a corpus annotated with pairs of sentences written in
Portuguese that is suitable for the exploration of textual entailment and paraphrasing classifiers. The corpus contains pairs of sentences
extracted from news articles written in European Portuguese (EP) and Brazilian Portuguese (BP), obtained from Google News Portugal
and Brazil, respectively. To create the corpus, the authors started by collecting a set of news articles describing the
same event (one news article from Google News Portugal and another from Google News Brazil) from Google News.
Then, they employed Latent Dirichlet Allocation (LDA) models to retrieve pairs of similar sentences between sets of news
articles that were grouped together around the same topic. For that, two LDA models were trained (for EP and for BP)
on external and large-scale collections of unannotated news articles from Portuguese and Brazilian news providers, respectively.
Then, the authors defined a lower and upper threshold for the sentence similarity score of the retrieved pairs of sentences,
taking into account that high similarity scores correspond to sentences that contain almost the same content (paraphrase candidates),
and low similarity scores correspond to sentences that are very different in content from each other (no-relation candidates).
From the collection of pairs of sentences obtained at this stage, the authors performed some manual grammatical corrections
and discarded some of the pairs wrongly retrieved. Furthermore, from a preliminary analysis made to the retrieved sentence pairs
the authors noticed that the number of contradictions retrieved during the previous stage was very low. Additionally, they also
noticed that even though paraphrases are not very frequent, they occur with some frequency in news articles. Consequently,
in contrast with the majority of the currently available corpora for other languages, which consider as labels “neutral”, “entailment”
and “contradiction” for the task of RTE, the authors of the ASSIN corpus decided to use as labels “none”, “entailment” and “paraphrase”.
Finally, the manual annotation of pairs of sentences was performed by human annotators. At least four annotators were randomly
selected to annotate each pair of sentences, which is done in two steps: (i) assigning a semantic similarity label (a score between 1 and 5,
from unrelated to very similar); and (ii) providing an entailment label (one sentence entails the other, sentences are paraphrases,
or no relation). Sentence pairs where at least three annotators do not agree on the entailment label were considered controversial
and thus discarded from the gold standard annotations. The full dataset has 10,000 sentence pairs, half of which in Brazilian Portuguese
and half in European Portuguese. Either language variant has 2,500 pairs for training, 500 for validation and 2,000 for testing. | 612 | 5 |
assin2 | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:natural-language-inference",
"task_ids:semantic-similarity-scoring",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"license:unknown"
] | The ASSIN 2 corpus is composed of rather simple sentences, following the procedures of SemEval 2014 Task 1.
The training and validation data are composed, respectively, of 6,500 and 500 sentence pairs in Brazilian Portuguese,
annotated for entailment and semantic similarity. Semantic similarity values range from 1 to 5, and text entailment
classes are either entailment or none. The test data are composed of approximately 3,000 sentence pairs with the same
annotation. All data were manually annotated. | 1,044 | 4 |
atomic | false | [
"task_categories:text2text-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"common-sense-if-then-reasoning"
] | This dataset provides the template sentences and
relationships defined in the ATOMIC common sense dataset. There are
three splits - train, test, and dev.
From the authors:
Disclaimer/Content warning: the events in atomic have been
automatically extracted from blogs, stories and books written at
various times. The events might depict violent or problematic actions,
which we left in the corpus for the sake of learning the (probably
negative but still important) commonsense implications associated with
the events. We removed a small set of truly out-dated events, but
might have missed some so please email us ([email protected]) if
you have any concerns. | 286 | 4 |
autshumato | false | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:tn",
"language:ts",
"language:zu",
"license:cc-by-2.5"
] | Multilingual information access is stipulated in the South African constitution. In practice, this
is hampered by a lack of resources and capacity to perform the large volumes of translation
work required to realise multilingual information access. One of the aims of the Autshumato
project is to develop machine translation systems for three South African language pairs. | 936 | 1 |
facebook/babi_qa | false | [
"task_categories:question-answering",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0",
"chained-qa",
"arxiv:1502.05698",
"arxiv:1511.06931"
] | The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading
comprehension via question answering. Our tasks measure understanding
in several ways: whether a system is able to answer questions via chaining facts,
simple induction, deduction and many more. The tasks are designed to be prerequisites
for any system that aims to be capable of conversing with a human.
The aim is to classify these tasks into skill sets, so that researchers
can identify (and then rectify) the failings of their systems.
banking77 | false | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2003.04807"
] | BANKING77 dataset provides a very fine-grained set of intents in a banking domain.
It comprises 13,083 customer service queries labeled with 77 intents.
It focuses on fine-grained single-domain intent detection. | 4,804 | 17 |
bbaw_egyptian | false | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:extended|wikipedia",
"language:de",
"language:egy",
"language:en",
"license:cc-by-4.0"
] | This dataset comprises parallel sentences of hieroglyphic encodings, transcription and translation
as used in the paper Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian
Hieroglyph. The data triples are extracted from the digital corpus of Egyptian texts compiled by
the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache". | 282 | 3 |
bbc_hindi_nli | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|bbc__hindi_news_classification",
"language:hi",
"license:mit"
] | This dataset is used to train models for Natural Language Inference Tasks in Low-Resource Languages like Hindi. | 309 | 0 |
bc2gm_corpus | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown"
] | Nineteen teams presented results for the Gene Mention Task at the BioCreative II Workshop.
In this task participants designed systems to identify substrings in sentences corresponding to gene name mentions.
A variety of different methods were used and the results varied with a highest achieved F1 score of 0.8721.
Here we present brief descriptions of all the methods used and a statistical analysis of the results.
We also demonstrate that, by combining the results from all submissions, an F score of 0.9066 is feasible,
and furthermore that the best result makes use of the lowest scoring submissions.
For more details, see: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/
The original dataset can be downloaded from: https://biocreative.bioinformatics.udel.edu/resources/corpora/biocreative-ii-corpus/
This dataset has been converted to CoNLL format for NER using the following tool: https://github.com/spyysalo/standoff2conll | 539 | 2 |
beans | false | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit"
] | Beans is a dataset of images of beans taken in the field using smartphone
cameras. It consists of 3 classes: 2 disease classes and the healthy class.
Diseases depicted include Angular Leaf Spot and Bean Rust. Data was annotated
by experts from the National Crops Resources Research Institute (NaCRRI) in
Uganda and collected by the Makerere AI research lab. | 8,390 | 8 |
best2009 | false | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:th",
"license:cc-by-nc-sa-3.0",
"word-tokenization"
] | `best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by
[NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for
[BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10).
The test set answers are not provided publicly. | 386 | 0 |
bianet | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:ku",
"language:tr",
"license:unknown"
] | A parallel news corpus in Turkish, Kurdish and English.
Bianet collects 3,214 Turkish articles with their sentence-aligned Kurdish or English translations from the Bianet online newspaper.
3 languages, 3 bitexts
total number of files: 6
total number of tokens: 2.25M
total number of sentence fragments: 0.14M | 539 | 0 |
bible_para | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:acu",
"language:af",
"language:agr",
"language:ake",
"language:am",
"language:amu",
"language:ar",
"language:bg",
"language:bsn",
"language:cak",
"language:ceb",
"language:ch",
"language:chq",
"language:chr",
"language:cjp",
"language:cni",
"language:cop",
"language:crp",
"language:cs",
"language:da",
"language:de",
"language:dik",
"language:dje",
"language:djk",
"language:dop",
"language:ee",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fi",
"language:fr",
"language:gbi",
"language:gd",
"language:gu",
"language:gv",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:jak",
"language:jiv",
"language:kab",
"language:kbh",
"language:kek",
"language:kn",
"language:ko",
"language:la",
"language:lt",
"language:lv",
"language:mam",
"language:mi",
"language:ml",
"language:mr",
"language:my",
"language:ne",
"language:nhg",
"language:nl",
"language:no",
"language:ojb",
"language:pck",
"language:pes",
"language:pl",
"language:plt",
"language:pot",
"language:ppk",
"language:pt",
"language:quc",
"language:quw",
"language:ro",
"language:rom",
"language:ru",
"language:shi",
"language:sk",
"language:sl",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:ss",
"language:sv",
"language:syr",
"language:te",
"language:th",
"language:tl",
"language:tmh",
"language:tr",
"language:uk",
"language:usp",
"language:vi",
"language:wal",
"language:wo",
"language:xh",
"language:zh",
"language:zu",
"license:cc0-1.0"
] | This is a multilingual parallel corpus created from translations of the Bible compiled by Christos Christodoulopoulos and Mark Steedman.
102 languages, 5,148 bitexts
total number of files: 107
total number of tokens: 56.43M
total number of sentence fragments: 2.84M | 1,007 | 6 |
big_patent | false | [
"task_categories:summarization",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"patent-summarization",
"arxiv:1906.03741"
] | BIGPATENT, consisting of 1.3 million records of U.S. patent documents
along with human written abstractive summaries.
Each US patent application is filed under a Cooperative Patent Classification
(CPC) code. There are nine such classification categories:
A (Human Necessities), B (Performing Operations; Transporting),
C (Chemistry; Metallurgy), D (Textiles; Paper), E (Fixed Constructions),
F (Mechanical Engineering; Lighting; Heating; Weapons; Blasting),
G (Physics), H (Electricity), and
Y (General tagging of new or cross-sectional technology)
There are two features:
- description: detailed description of patent.
- abstract: patent abstract. | 2,937 | 15 |
billsum | false | [
"task_categories:summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"bills-summarization",
"arxiv:1910.00523"
] | BillSum: summarization of US Congressional and California state bills (a loading sketch follows this entry).
There are several features:
- text: bill text.
- summary: summary of the bills.
- title: title of the bills.
The following features are present for US bills only (CA bills do not have them):
- text_len: number of chars in text.
- sum_len: number of chars in summary. | 4,292 | 16 |
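A minimal sketch of loading this dataset with the `datasets` library and reading the fields listed above; the dataset id is taken from the first column, and its availability on the Hugging Face Hub is an assumption:
```python
from datasets import load_dataset

# Assumes the corpus is published on the Hub under the id shown in the first column.
billsum = load_dataset("billsum", split="train")

example = billsum[0]
print(example["title"])          # title of the bill
print(example["summary"][:200])  # human-written summary, truncated for display
print(len(example["text"]))      # length of the bill text in characters
```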
bing_coronavirus_query_set | false | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:other"
] | This dataset was curated from the Bing search logs (desktop users only) over the period of Jan 1st, 2020 – (Current Month - 1). Only searches that were issued many times by multiple users were included. The dataset includes queries from all over the world that had an intent related to the Coronavirus or Covid-19. In some cases this intent is explicit in the query itself (e.g., “Coronavirus updates Seattle”), in other cases it is implicit, e.g. “Shelter in place”. The implicit intent of search queries (e.g., “Toilet paper”) was extracted using random walks on the click graph as outlined in this paper by Microsoft Research. All personal data were removed. | 840 | 0 |
biomrc | false | [
"language:en"
] | We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard. | 1,130 | 3 |
biosses | false | [
"task_categories:text-classification",
"task_ids:text-scoring",
"task_ids:semantic-similarity-scoring",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:gpl-3.0"
] | BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). | 1,246 | 3 |
blbooks | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:other",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"language:nl",
"license:cc0-1.0",
"digital-humanities-research"
] | A dataset comprising text created by OCR from 49,455 digitised books, equating to 65,227 volumes (25+ million pages), published between c. 1510 and c. 1900.
The books cover a wide range of subject areas including philosophy, history, poetry and literature. | 679 | 4 |
blbooksgenre | false | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:topic-classification",
"task_ids:multi-label-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"language:en",
"language:fr",
"language:nl",
"license:cc0-1.0"
] | This dataset contains metadata for resources belonging to the British Library’s digitised printed books (18th-19th century) collection (bl.uk/collection-guides/digitised-printed-books).
This metadata has been extracted from British Library catalogue records.
The metadata held within our main catalogue is updated regularly.
This metadata dataset should be considered a snapshot of this metadata. | 1,669 | 3 |
blended_skill_talk | false | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2004.08449"
] | A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge. | 1,607 | 26 |
blimp | false | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0"
] | BLiMP is a challenge set for evaluating what language models (LMs) know about
major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each
containing 1000 minimal pairs isolating specific contrasts in syntax,
morphology, or semantics. The data is automatically generated according to
expert-crafted grammars. | 18,657 | 27 |
blog_authorship_corpus | false | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown"
] | The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person.
Each blog is presented as a separate file, the name of which indicates a blogger id# and the blogger’s self-provided gender, age, industry and astrological sign. (All are labeled for gender and age but for many, industry and/or sign is marked as unknown.)
All bloggers included in the corpus fall into one of three age groups:
- 8240 "10s" blogs (ages 13-17),
- 8086 "20s" blogs (ages 23-27),
- 2994 "30s" blogs (ages 33-47).
For each age group there are an equal number of male and female bloggers.
Each blog in the corpus includes at least 200 occurrences of common English words. All formatting has been stripped with two exceptions. Individual posts within a single blogger are separated by the date of the following post and links within a post are denoted by the label urllink.
The corpus may be freely used for non-commercial research purposes. | 602 | 4 |
bn_hate_speech | false | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:bn",
"license:mit",
"hate-speech-topic-classification",
"arxiv:2004.07807"
] | The Bengali Hate Speech Dataset is a collection of Bengali articles collected from Bengali news articles,
news dump of Bengali TV channels, books, blogs, and social media. Emphasis was placed on Facebook pages and
newspaper sources because they attract close to 50 million followers and are a common source of opinions
and hate speech. The raw text corpus contains 250 million articles and the full dataset is being prepared
for release. This is a subset of the full dataset.
This dataset was prepared for hate-speech text classification benchmark on Bengali, an under-resourced language. | 294 | 1 |
bnl_newspapers | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"language:da",
"language:de",
"language:fi",
"language:fr",
"language:lb",
"language:nl",
"language:pt",
"license:cc0-1.0"
] | Digitised historic newspapers from the Bibliothèque nationale (BnL) - the National Library of Luxembourg. | 276 | 0 |
bookcorpus | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2105.05241"
] | Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This work aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets. | 10,541 | 80 |
bookcorpusopen | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2105.05241"
] | Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.
This version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas. | 925 | 13 |
boolq | false | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0"
] | BoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally
occurring: they are generated in unprompted and unconstrained settings.
Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context.
The text-pair classification setup is similar to existing natural language inference tasks. | 4,429 | 10 |
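A hedged loading sketch, assuming the dataset id `boolq`, train/validation splits, and the field names `question`, `passage`, and `answer` implied by the triplet description:
from datasets import load_dataset
boolq = load_dataset("boolq")          # splits assumed: train / validation
ex = boolq["train"][0]
print(ex["question"])
print(ex["passage"][:200])
print(ex["answer"])                    # the yes/no label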
bprec | false | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pl",
"license:unknown"
] | Dataset consisting of Polish language texts annotated to recognize brand-product relations. | 805 | 0 |
break_data | false | [
"task_categories:text2text-generation",
"task_ids:open-domain-abstractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown"
] | Break is a human-annotated dataset of natural language questions and their Question Decomposition Meaning Representations
(QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases.
This repository contains the Break dataset along with information on the exact data format. | 1,203 | 0 |
brwac | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:pt",
"license:unknown"
] | The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework,
which was made public for research purposes. The current corpus version, released in January 2017, is composed of
3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available
solely for academic research purposes, and users agree not to use it for any commercial applications.
Manually download at https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC | 299 | 5 |
bsd_ja_en | false | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"language:ja",
"license:cc-by-nc-sa-4.0",
"business-conversations-translation"
] | This is the Business Scene Dialogue (BSD) dataset,
a Japanese-English parallel corpus containing written conversations
in various business scenarios.
The dataset was constructed in 3 steps:
1) selecting business scenes,
2) writing monolingual conversation scenarios according to the selected scenes, and
3) translating the scenarios into the other language.
Half of the monolingual scenarios were written in Japanese
and the other half were written in English.
Fields:
- id: dialogue identifier
- no: sentence pair number within a dialogue
- en_speaker: speaker name in English
- ja_speaker: speaker name in Japanese
- en_sentence: sentence in English
- ja_sentence: sentence in Japanese
- original_language: language in which monolingual scenario was written
- tag: scenario
- title: scenario title | 278 | 1 |
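A small sketch of how the fields listed above might be accessed, assuming the dataset id `bsd_ja_en`, a `train` split, and one record per sentence pair:
from datasets import load_dataset
bsd = load_dataset("bsd_ja_en", split="train")
pair = bsd[0]
print(pair["ja_speaker"], pair["ja_sentence"])   # Japanese side
print(pair["en_speaker"], pair["en_sentence"])   # English side
print(pair["original_language"], pair["tag"], pair["title"])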
bswac | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:bs",
"license:cc-by-sa-3.0"
] | The Bosnian web corpus bsWaC was built by crawling the .ba top-level domain in 2014. The corpus was near-deduplicated on paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Bosnian vs. Croatian vs. Serbian).
Version 1.0 of this corpus is described in http://www.aclweb.org/anthology/W14-0405. Version 1.1 contains newer and better linguistic annotations. | 272 | 0 |
c3 | false | [
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:other",
"arxiv:1904.09679"
] | Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations.
We present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C^3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C^3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text. | 467 | 2 |
c4 | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:odc-by",
"arxiv:1910.10683"
] | A colossal, cleaned version of Common Crawl's web crawl corpus.
Based on Common Crawl dataset: "https://commoncrawl.org".
This is the processed version of Google's C4 dataset by AllenAI. | 45,904 | 67 |
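Given the corpus size, a hedged sketch using streaming mode; the availability of the loader under the id `c4` and the config name `en` are assumptions:
from datasets import load_dataset
c4_en = load_dataset("c4", "en", split="train", streaming=True)  # avoids downloading the full corpus
for doc in c4_en.take(3):
    print(doc["text"][:200])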
cail2018 | false | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:zh",
"license:unknown",
"judgement-prediction",
"arxiv:1807.02478"
] | In this paper, we introduce Chinese AI and Law challenge dataset (CAIL2018),
the first large-scale Chinese legal dataset for judgment prediction. CAIL contains more than 2.6 million
criminal cases published by the Supreme People's Court of China, which are several times larger than other
datasets in existing works on judgment prediction. Moreover, the annotations of judgment results are more
detailed and rich. It consists of applicable law articles, charges, and prison terms, which are expected
to be inferred according to the fact descriptions of cases. For comparison, we implement several conventional
text classification baselines for judgment prediction and experimental results show that it is still a
challenge for current models to predict the judgment results of legal cases, especially on prison terms.
The dataset is released to help researchers make improvements on legal judgment prediction. | 337 | 3 |
caner | false | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ar",
"license:unknown"
] | The Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling the issues in the recognition of Arabic named entities. | 357 | 1 |
capes | false | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"language:pt",
"license:unknown",
"dissertation-abstracts-translation",
"theses-translation"
] | A parallel corpus of theses and dissertation abstracts in English and Portuguese was collected from the CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil. The corpus is sentence-aligned for all language pairs. Approximately 240,000 documents were collected and aligned using the Hunalign algorithm. | 275 | 1 |
casino | false | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:dialogue-modeling",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0"
] | We provide a novel dataset (referred to as CaSiNo) of 1030 negotiation dialogues. Two participants take the role of campsite neighbors and negotiate for Food, Water, and Firewood packages, based on their individual preferences and requirements. This design keeps the task tractable, while still facilitating linguistically rich and personal conversations. This helps to overcome the limitations of prior negotiation datasets such as Deal or No Deal and Craigslist Bargain. Each dialogue consists of rich meta-data including participant demographics, personality, and their subjective evaluation of the negotiation in terms of satisfaction and opponent likeness. | 303 | 1 |
catalonia_independence | false | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ca",
"language:es",
"license:cc-by-nc-sa-4.0",
"stance-detection"
] | This dataset contains two corpora in Spanish and Catalan that consist of annotated Twitter messages for automatic stance detection. The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target - independence of Catalonia. | 421 | 1 |
cats_vs_dogs | false | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown"
] | null | 885 | 8 |
cawac | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:ca",
"license:cc-by-sa-3.0"
] | caWaC is a 780-million-token web corpus of Catalan built from the .cat top-level-domain in late 2013. | 275 | 0 |
cbt | false | [
"task_categories:other",
"task_categories:question-answering",
"task_ids:multiple-choice-qa",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:gfdl",
"arxiv:1511.02301"
] | The Children’s Book Test (CBT) is designed to measure directly
how well language models can exploit wider linguistic context.
The CBT is built from books that are freely available. | 4,089 | 7 |
cc100 | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10M<n<100M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:ff",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:ht",
"language:hu",
"language:hy",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lg",
"language:li",
"language:ln",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:my",
"language:ne",
"language:nl",
"language:no",
"language:ns",
"language:om",
"language:or",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:ro",
"language:ru",
"language:sa",
"language:sc",
"language:sd",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:ss",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tn",
"language:tr",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:wo",
"language:xh",
"language:yi",
"language:yo",
"language:zh",
"language:zu",
"license:unknown"
] | This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). It was constructed using the URLs and paragraph indices provided by the CC-Net repository by processing January-December 2018 Common Crawl snapshots. Each file consists of documents separated by double newlines and paragraphs within the same document separated by a newline. The data is generated using the open-source CC-Net repository. No claims of intellectual property are made on the work of preparation of the corpus. | 3,397 | 22 |
cc_news | false | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:unknown"
] | CC-News contains news articles from news sites all over the world. The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/. This version of the dataset has 708241 articles. It represents a small portion of the English-language subset of the CC-News dataset, created using news-please (Hamborg et al., 2017) to collect and extract the English-language portion of CC-News. | 3,067 | 18 |
ccaligned_multilingual | false | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:af",
"language:ak",
"language:am",
"language:ar",
"language:as",
"language:ay",
"language:az",
"language:be",
"language:bg",
"language:bm",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:ceb",
"language:ckb",
"language:cs",
"language:cy",
"language:de",
"language:dv",
"language:el",
"language:eo",
"language:es",
"language:fa",
"language:ff",
"language:fi",
"language:fo",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gn",
"language:gu",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:ka",
"language:kac",
"language:kg",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:la",
"language:lg",
"language:li",
"language:ln",
"language:lo",
"language:lt",
"language:lv",
"language:mg",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:ne",
"language:nl",
"language:no",
"language:nso",
"language:ny",
"language:om",
"language:or",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sc",
"language:sd",
"language:se",
"language:shn",
"language:si",
"language:sk",
"language:sl",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:ss",
"language:st",
"language:su",
"language:sv",
"language:sw",
"language:syc",
"language:szl",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:ti",
"language:tl",
"language:tn",
"language:tr",
"language:ts",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vi",
"language:war",
"language:wo",
"language:xh",
"language:yi",
"language:yo",
"language:zgh",
"language:zh",
"language:zu",
"language:zza",
"license:unknown"
] | CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents and ensuring that the corresponding language codes appeared in the URLs of the web documents. This pattern-matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to multiple documents in different target languages, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). | 932 | 2 |
cdsc | false | [
"task_categories:other",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:cc-by-nc-sa-4.0",
"sentences entailment and relatedness"
] | Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource. | 404 | 0 |
cdt | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:pl",
"license:bsd-3-clause"
] | The Cyberbullying Detection task was part of the 2019 edition of the PolEval competition. The goal is to predict if a given Twitter message contains cyberbullying (harmful) content. | 272 | 0 |
cedr | false | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ru",
"license:apache-2.0",
"emotion-classification"
] | This dataset is designed to solve the emotion recognition task for text data in Russian. The Corpus for Emotions Detecting in
Russian-language text sentences of different social sources (CEDR) contains 9410 sentences in Russian labeled for 5 emotion
categories. The data was collected from different sources: posts of the LiveJournal social network, texts of the online news
agency Lenta.ru, and Twitter microblog posts. There are two variants of the corpus: main and enriched. The enriched variant
includes tokenization and lemmatization. The dataset comes with predefined train/test splits. | 692 | 3 |
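A hedged sketch of loading one of the two variants; the config names `main` and `enriched` mirror the description but are assumptions about the loader:
from datasets import load_dataset
cedr = load_dataset("cedr", "enriched")   # or "main"
print(cedr["train"][0])                   # predefined train/test splits per the description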
cfq | false | [
"task_categories:question-answering",
"task_categories:other",
"task_ids:open-domain-qa",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"compositionality",
"arxiv:1912.09713"
] | The CFQ dataset (and its splits) for measuring compositional generalization.
See https://arxiv.org/abs/1912.09713.pdf for background.
Example usage:
data = datasets.load_dataset('cfq', 'mcd1') | 1,225 | 1 |
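A slightly fuller sketch of the usage above, assuming split names such as `mcd1` map to loader configurations and that each example carries a natural-language question plus its SPARQL query (field names `question` and `query` are assumptions):
from datasets import load_dataset
cfq = load_dataset("cfq", "mcd1")
ex = cfq["train"][0]
print(ex["question"])
print(ex["query"])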
chr_en | false | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_categories:translation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"annotations_creators:found",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:chr",
"language:en",
"license:other",
"arxiv:2010.04791"
] | ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.
ChrEn is extremely low-resource, containing 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.
ChrEn also contains 5k Cherokee monolingual data to enable semi-supervised learning. | 672 | 2 |
cifar10 | false | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:unknown"
] | The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images
per class. There are 50000 training images and 10000 test images. | 23,528 | 14 |
cifar100 | false | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-80-Million-Tiny-Images",
"language:en",
"license:unknown"
] | The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images
per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses.
There are two labels per image - fine label (actual class) and coarse label (superclass). | 3,516 | 6 |
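A hedged sketch showing the two labels per image; the feature names `img`, `fine_label`, and `coarse_label` are assumptions based on the description:
from datasets import load_dataset
cifar = load_dataset("cifar100", split="train")
sample = cifar[0]
print(sample["fine_label"], sample["coarse_label"])  # actual class vs. superclass
sample["img"]  # 32x32 colour image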
circa | false | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"question-answer-pair-classification",
"arxiv:2010.03450"
] | The Circa (meaning ‘approximately’) dataset aims to help machine learning systems
to solve the problem of interpreting indirect answers to polar questions.
The dataset contains pairs of yes/no questions and indirect answers, together with
annotations for the interpretation of the answer. The data is collected in 10
different social conversational situations (e.g. food preferences of a friend).
NOTE: There might be missing labels in the dataset and we have replaced them with -1.
The original dataset contains no train/dev/test splits. | 1,314 | 1 |
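Since the release has no predefined splits and missing labels are encoded as -1, a hedged sketch of filtering and splitting; the dataset id `circa` and the label column name used here (`goldstandard2`) are assumptions:
from datasets import load_dataset
circa = load_dataset("circa", split="train")
circa = circa.filter(lambda ex: ex["goldstandard2"] != -1)  # drop examples with missing labels
splits = circa.train_test_split(test_size=0.2, seed=42)     # create our own train/test split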
civil_comments | false | [
"language:en",
"arxiv:1903.04561"
] | The comments in this dataset come from an archive of the Civil Comments
platform, a commenting plugin for independent news sites. These public comments
were created from 2015 - 2017 and appeared on approximately 50 English-language
news sites across the world. When Civil Comments shut down in 2017, they chose
to make the public comments available in a lasting open archive to enable future
research. The original data, published on figshare, includes the public comment
text, some associated metadata such as article IDs, timestamps and
commenter-generated "civility" labels, but does not include user ids. Jigsaw
extended this dataset by adding additional labels for toxicity and identity
mentions. This data set is an exact replica of the data released for the
Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This
dataset is released under CC0, as is the underlying comment text. | 892 | 1 |
clickbait_news_bg | false | [
"task_categories:text-classification",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:bg",
"license:unknown"
] | Dataset with clickbait and fake news in Bulgarian. Introduced for the Hack the Fake News 2017. | 280 | 0 |
climate_fever | false | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:text-scoring",
"task_ids:fact-checking",
"task_ids:fact-checking-retrieval",
"task_ids:semantic-similarity-scoring",
"task_ids:multi-input-text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|wikipedia",
"source_datasets:original",
"language:en",
"license:unknown",
"arxiv:2012.00614"
] | A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected from the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. | 1,859 | 4 |
clinc_oos | false | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-3.0"
] | This dataset is for evaluating the performance of intent classification systems in the
presence of "out-of-scope" queries. By "out-of-scope", we mean queries that do not fall
into any of the system-supported intent classes. Most datasets include only data that is
"in-scope". Our dataset includes both in-scope and out-of-scope data. You might also know
the term "out-of-scope" by other terms, including "out-of-domain" or "out-of-distribution". | 3,086 | 9 |
clue | false | [
"task_categories:text-classification",
"task_categories:multiple-choice",
"task_ids:topic-classification",
"task_ids:semantic-similarity-scoring",
"task_ids:natural-language-inference",
"task_ids:multiple-choice-qa",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:zh",
"license:unknown",
"coreference-nli",
"qa-nli"
] | CLUE, A Chinese Language Understanding Evaluation Benchmark
(https://www.cluebenchmarks.com/) is a collection of resources for training,
evaluating, and analyzing Chinese language understanding systems. | 4,366 | 17 |
cmrc2018 | false | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zh",
"license:cc-by-sa-4.0"
] | A Span-Extraction dataset for Chinese machine reading comprehension to add language
diversities in this area. The dataset is composed of nearly 20,000 real questions annotated
on Wikipedia paragraphs by human experts. We also annotated a challenge set which
contains the questions that need comprehensive understanding and multi-sentence
inference throughout the context. | 851 | 6 |
cmu_hinglish_dog | false | [
"task_categories:translation",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:hi",
"license:cc-by-sa-3.0",
"license:gfdl",
"arxiv:1809.07358"
] | This is a collection of text conversations in Hinglish (code-mixing between Hindi and English) and their corresponding English-only versions. It can be used for translating between the two. | 321 | 1 |
cnn_dailymail | false | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0"
] | CNN/DailyMail non-anonymized summarization dataset.
There are two features:
- article: text of news article, used as the document to be summarized
- highlights: joined text of highlights with <s> and </s> around each
highlight, which is the target summary | 62,766 | 50 |
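A minimal loading sketch; the config name `3.0.0` is an assumption (the loader has historically been versioned), while `article` and `highlights` follow the feature list above:
from datasets import load_dataset
cnn_dm = load_dataset("cnn_dailymail", "3.0.0", split="validation")
ex = cnn_dm[0]
print(ex["article"][:300])   # document to be summarized
print(ex["highlights"])      # target summary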
coached_conv_pref | false | [
"task_categories:other",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:token-classification",
"task_ids:dialogue-modeling",
"task_ids:parsing",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"Conversational Recommendation"
] | A dataset consisting of 502 English dialogs with 12,000 annotated utterances between a user and an assistant discussing
movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers,
where one worker plays the role of an 'assistant', while the other plays the role of a 'user'. The 'assistant' elicits
the 'user’s' preferences about movies following a Coached Conversational Preference Elicitation (CCPE) method. The
assistant asks questions designed to minimize the bias in the terminology the 'user' employs to convey his or her
preferences as much as possible, and to obtain these preferences in natural language. Each dialog is annotated with
entity mentions, preferences expressed about entities, descriptions of entities provided, and other statements of
entities. | 273 | 2 |