# Mini Squad
A simple transformation on the SQuAD dataset for training tiny language models.

## Overview
The Mini Squad dataset is a modified version of the Stanford Question Answering Dataset (SQuAD). It focuses on extracting concise context sentences around each answer, making it suitable for training small-scale language models or fine-tuning lightweight architectures.

### Key Features
- **Reduced Context**: Extracts only the sentence containing the answer, bounded by sentence-ending punctuation (period, question mark, exclamation point, or semicolon); see the sketch after this list.
- **Simplified Format**: Each entry includes `context`, `question`, and `answer`, providing a clean and easy-to-use structure.
- **Preprocessed for Lightweight Models**: Designed to minimize memory and computational requirements for smaller models.
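
The reduced-context step can be illustrated with a short sketch. The function below is a hypothetical reimplementation, not the actual preprocessing script used to build the dataset: it locates the answer in the original SQuAD context and trims outward to the nearest sentence-ending punctuation on each side.

```python
SENTENCE_END = ".?!;"  # punctuation treated as sentence boundaries

def extract_answer_sentence(context: str, answer: str) -> str:
    """Return the sentence of `context` that contains `answer`.

    Hypothetical sketch of the Mini Squad reduction; the official
    preprocessing may differ in edge-case handling.
    """
    start = context.find(answer)
    if start == -1:
        return context  # answer not found verbatim; keep the full context

    # Walk left from the answer to the previous sentence-ending mark.
    left = start
    while left > 0 and context[left - 1] not in SENTENCE_END:
        left -= 1

    # Walk right from the end of the answer to the next sentence-ending mark.
    right = start + len(answer)
    while right < len(context) and context[right] not in SENTENCE_END:
        right += 1
    if right < len(context):
        right += 1  # include the punctuation mark itself

    return context[left:right].strip()

# Example call (variables are placeholders for an original SQuAD entry):
# sentence = extract_answer_sentence(original_context, "Saint Bernadette Soubirous")
```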

## Dataset Structure
The dataset consists of two splits:
- `train.json`: Training set.
- `validation.json`: Validation set.

Each file is in JSON Lines format, where each line is a single JSON object with the following fields:
- `context`: The extracted sentence containing the answer.
- `question`: The question from the original dataset.
- `answer`: The corresponding answer.

### Example Entry
```json
{
    "context": "France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858.",
    "question": "To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?",
    "answer": "Saint Bernadette Soubirous"
}
```

## Usage

### Loading the Dataset
You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("zakerytclarke/mini_squad")

# Access the splits
train_df = dataset["train"].to_pandas()
validation_df = dataset["validation"].to_pandas()

print(train_df.head())
```
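
Because each split is plain JSON Lines, the files can also be read without the `datasets` library. A minimal sketch, assuming `train.json` has been downloaded to the working directory:

```python
import json

# Read the training split line by line; each non-empty line is one JSON
# object with "context", "question", and "answer" fields.
examples = []
with open("train.json", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            examples.append(json.loads(line))

print(len(examples), "training examples")
print(examples[0]["question"], "->", examples[0]["answer"])
```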

### Applications
- Fine-tuning small language models (see the sketch below).
- Training efficient QA systems.
- Benchmarking lightweight NLP architectures.
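
For fine-tuning, a common approach is to flatten each entry into a single training string. A minimal sketch; the prompt template below is an illustrative choice, not part of the dataset:

```python
def format_example(entry: dict) -> str:
    """Turn one Mini Squad entry into a plain-text training example.

    The template is illustrative; adapt it to the prompt format your
    model expects.
    """
    return (
        f"Context: {entry['context']}\n"
        f"Question: {entry['question']}\n"
        f"Answer: {entry['answer']}"
    )

# Example with the entry shown above:
# print(format_example({
#     "context": "France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858.",
#     "question": "To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?",
#     "answer": "Saint Bernadette Soubirous",
# }))
```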

## File Structure
```
mini-squad/
├── train.json
└── validation.json
```

## Citation
If you use Mini Squad in your research or applications, please cite the original SQuAD dataset:
```
@article{rajpurkar2016squad,
  title={SQuAD: 100,000+ Questions for Machine Comprehension of Text},
  author={Rajpurkar, Pranav and Zhang, Jian and Lopyrev, Konstantin and Liang, Percy},
  journal={arXiv preprint arXiv:1606.05250},
  year={2016}
}
```

## License
The Mini Squad dataset inherits the license of the original SQuAD dataset. Please refer to the [SQuAD license](https://github.com/rajpurkar/SQuAD-explorer/blob/master/LICENSE) for details.