TinySQuAD
Motivation
The SQuAD dataset has been instrumental in training question-answering language models thanks to its structured labeling. However, its lengthy passages can be unwieldy for tiny language models. By leveraging recent advances in retrieval methods such as Retrieval-Augmented Generation (RAG) and embedding models, we can efficiently extract the sentences relevant to each question from a passage. This dataset provides a streamlined alternative, facilitating the development of compact and effective QA and reasoning systems.
Dataset Summary
TinySQuAD is a condensed version of the Stanford Question Answering Dataset (SQuAD), designed to simplify context for question-answering tasks. In this dataset, the context has been trimmed to include only the sentence relevant to the question, rather than an entire passage. This makes it more concise and focused for quick, high-accuracy answer extraction. The dataset contains four fields: title, context, question, and answer.
Dataset Structure
Each data point in TinySQuAD consists of the following fields:
- title: The title or subject of the article or passage.
- context: A single sentence containing relevant information for answering the question.
- question: A natural language question that can be answered using the provided context.
- answer: The exact answer to the question, extracted from the context sentence.
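An example record with these four fields might look as follows. This is an illustrative sketch using a well-known question from the original SQuAD data; the exact formatting of values in TinySQuAD is an assumption.

```python
# Illustrative TinySQuAD-style record (field values are an assumption
# about the exact formatting; the four-field schema is from the card).
record = {
    "title": "Normans",
    "context": (
        "The Normans were the people who in the 10th and 11th centuries "
        "gave their name to Normandy, a region in France."
    ),
    "question": "In what country is Normandy located?",
    "answer": "France",
}

# For extractive QA, the answer must appear verbatim in the context.
assert record["answer"] in record["context"]
```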
Supported Tasks
- Question Answering (QA): The dataset is designed for extractive QA tasks, where the goal is to find the correct answer to a question within a given context.
Languages
The dataset is in English.
Dataset Creation
TinySQuAD was created by processing the original SQuAD dataset to reduce the length of the context, selecting only the sentence most relevant to answering each question. This process simplifies the task and reduces unnecessary information, making it ideal for small-scale QA models or tasks where concise contexts are preferred.
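The reduction step described above can be sketched naively as selecting the first sentence of the passage that contains the answer. This is a minimal illustration only; the actual TinySQuAD pipeline used retrieval and embedding methods, and the function below is hypothetical.

```python
import re
from typing import Optional

def select_relevant_sentence(passage: str, answer: str) -> Optional[str]:
    """Pick the first sentence of the passage containing the answer.

    Uses a naive split on sentence-ending punctuation; a real pipeline
    would use retrieval/embedding-based relevance instead.
    """
    sentences = re.split(r"(?<=[.!?])\s+", passage.strip())
    for sentence in sentences:
        if answer in sentence:
            return sentence
    return None

passage = (
    "Normandy is a region in France. It was named after the Normans. "
    "They settled there in the 10th century."
)
print(select_relevant_sentence(passage, "Normans"))
# prints "It was named after the Normans."
```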
Train/Validation Splits
Note that TinySQuAD does not preserve the train/validation splits of the original SQuAD dataset. During its creation, the examples were processed and may have been shuffled or reorganized, so the partitioning does not align with the original. Researchers and practitioners should therefore take care when comparing results on TinySQuAD against models trained on the original SQuAD dataset, as evaluations may not be directly comparable.
Uses and Applications
- Training and Evaluating QA Systems: TinySQuAD is useful for training question-answering models that need focused contexts.
- Low-resource Environments: The shorter context length makes this dataset suitable for low-computation environments like edge devices or browser-based applications.
Citation
If you use TinySQuAD in your research or projects, please cite it as follows:
@dataset{tinysquad,
  title={TinySQuAD: A Condensed Question-Answering Dataset},
  author={Your Name},
  year={2024},
  url={your-dataset-url}
}
License
The dataset inherits the same licensing as the original SQuAD dataset (CC BY-SA 4.0).