Dataset Viewer
All eight columns are string-valued: id, url, year, hash, title, and abs each take 5 distinct values (one per row), while source and license_type each take 2.

id | url | source | year | license_type | hash | title | abs |
---|---|---|---|---|---|---|---|
1809.09600 | https://arxiv.org/pdf/1809.09600.pdf | arxiv | 2018 | cc by 4.0 | 17e5df4028138faf704cd905582e9c96 | HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering | Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers. We introduce HotpotQA, a new dataset with 113k Wikipedia-based question-answer pairs with four key features: (1) the questions require finding and reasoning over multiple supporting documents to answer; (2) the questions are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas; (3) we provide sentence-level supporting facts required for reasoning, allowing QA systems to reason with strong supervision and explain the predictions; (4) we offer a new type of factoid comparison questions to test QA systems' ability to extract relevant facts and perform necessary comparison. We show that HotpotQA is challenging for the latest QA systems, and the supporting facts enable models to improve performance and make explainable predictions. |
2009.07758 | https://arxiv.org/pdf/2009.07758.pdf | arxiv | 2020 | cc by 4.0 | d00711f9fe400b0f9063bc76a682f14f | GLUCOSE: GeneraLized and COntextualized Story Explanations | When humans read or listen, they make implicit commonsense inferences that frame their understanding of what happened and why. As a step toward AI systems that can build similar mental models, we introduce GLUCOSE, a large-scale dataset of implicit commonsense causal knowledge, encoded as causal mini-theories about the world, each grounded in a narrative context. To construct GLUCOSE, we drew on cognitive psychology to identify ten dimensions of causal explanation, focusing on events, states, motivations, and emotions. Each GLUCOSE entry includes a story-specific causal statement paired with an inference rule generalized from the statement. This paper details two concrete contributions. First, we present our platform for effectively crowdsourcing GLUCOSE data at scale, which uses semi-structured templates to elicit causal explanations. Using this platform, we collected a total of ~670K specific statements and general rules that capture implicit commonsense knowledge about everyday situations. Second, we show that existing knowledge resources and pretrained language models do not include or readily predict GLUCOSE's rich inferential content. However, when state-of-the-art neural models are trained on this knowledge, they can start to make commonsense inferences on unseen stories that match humans' mental models. |
N19-1423 | https://aclanthology.org/N19-1423.pdf | acl anthology | 2019 | acl license | 9ffe961d898ddbaeafef32cedda9a64b | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement). |
2301.12345 | https://arxiv.org/pdf/2301.12345.pdf | arxiv | 2023 | cc by 4.0 | 2b08a8f62a0c53ec3f13020d0ca1e2d3 | Chemotactic motility-induced phase separation | Collectives of actively-moving particles can spontaneously separate into dilute and dense phases -- a fascinating phenomenon known as motility-induced phase separation (MIPS). MIPS is well-studied for randomly-moving particles with no directional bias. However, many forms of active matter exhibit collective chemotaxis, directed motion along a chemical gradient that the constituent particles can generate themselves. Here, using theory and simulations, we demonstrate that collective chemotaxis strongly competes with MIPS -- in some cases, arresting or completely suppressing phase separation, or in other cases, generating fundamentally new dynamic instabilities. We establish quantitative principles describing this competition, thereby helping to reveal and clarify the rich physics underlying active matter systems that perform chemotaxis, ranging from cells to robots. |
P16-1174 | https://aclanthology.org/P16-1174 | acl anthology | 2016 | acl license | 5e6524f4b776a121f6325eb15a00bd7f | A Trainable Spaced Repetition Model for Language Learning |
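
Given the flat schema above, a minimal sketch of loading and filtering the dataset with the Hugging Face `datasets` library; the repository id `user/paper-abstracts` is a placeholder, not this dataset's actual Hub id.

```python
# Sketch only: assumes the dataset is published on the Hugging Face Hub
# under a placeholder repo id. Substitute the real "<user>/<dataset>" id.
from datasets import load_dataset

ds = load_dataset("user/paper-abstracts", split="train")  # hypothetical repo id

# Every record carries the eight string columns shown in the table above.
print(ds.column_names)
# ['id', 'url', 'source', 'year', 'license_type', 'hash', 'title', 'abs']

# Example: keep only arXiv papers released under CC BY 4.0, then list them.
arxiv_cc = ds.filter(
    lambda row: row["source"] == "arxiv" and row["license_type"] == "cc by 4.0"
)
for row in arxiv_cc:
    print(row["year"], row["title"])
```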