---
license: cc-by-nc-4.0
---
A heavily curated dataset derived from recipe-nlg (source="**Gathered**" only). Scraping artifacts, typographical errors, stray Unicode, and empty or very short recipes were removed.
The data was then formatted into the Alpaca instruction format with Instruction, Input, and Output fields. The total number of recipes went from ~2.2M (original dataset) to ~500K.
Obviously, it's still not perfect (I won't lie, the original dataset was very flawed). Fully fixing it would require very time-consuming manual editing, so you can consider it a WIP.
If you want to support me, you can [here](https://ko-fi.com/adamcodd).
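For illustration only, here is a minimal sketch of how such filtering and Alpaca formatting could be done with the `datasets` library. The instruction wording, column names, and mapping below are assumptions, not the exact pipeline used for this dataset:

```python
from datasets import load_dataset

# Original RecipeNLG corpus (requires a manual download per its terms).
ds = load_dataset("recipe_nlg", data_dir="path/to/recipe_nlg")["train"]

# Keep only recipes whose source is "Gathered". The column may be a
# ClassLabel, in which case we compare against the label's index.
src = ds.features["source"]
gathered = src.str2int("Gathered") if hasattr(src, "str2int") else "Gathered"
ds = ds.filter(lambda row: row["source"] == gathered)

def to_alpaca(row):
    # Hypothetical mapping into Alpaca's Instruction/Input/Output fields.
    return {
        "instruction": "Write a recipe using the given ingredients.",
        "input": ", ".join(row["ingredients"]),
        "output": row["title"] + "\n" + "\n".join(row["directions"]),
    }

ds = ds.map(to_alpaca, remove_columns=ds.column_names)
```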
## Token Distribution Analysis
We analyzed the token count distribution of the dataset using the Llama 3 tokenizer on the following Alpaca prompt template:
```
"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:
{}"""
```
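As a rough, reproducible sketch of this analysis (assuming the `transformers` tokenizer for the gated `meta-llama/Meta-Llama-3-8B` checkpoint and rows with `instruction`/`input`/`output` keys; the exact prompt whitespace is an assumption):

```python
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# The Alpaca prompt template quoted above.
PROMPT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n"
    "### Instruction:\n{}\n"
    "### Input:\n{}\n"
    "### Response:\n{}"
)

def n_tokens(row):
    """Token count of one fully formatted prompt (includes the BOS token)."""
    text = PROMPT.format(row["instruction"], row["input"], row["output"])
    return len(tokenizer(text)["input_ids"])

# `rows` stands in for the dataset: any iterable of Alpaca-style dicts.
rows = [{"instruction": "Write a recipe using the given ingredients.",
         "input": "flour, butter, sugar",
         "output": "Shortbread\nCream butter and sugar, fold in flour, bake."}]

counts = np.array([n_tokens(r) for r in rows])
print("min / median / max:", counts.min(), np.median(counts), counts.max())
print("deciles:", np.percentile(counts, range(10, 101, 10)))
```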
### Key Statistics
- **Minimum tokens**: 164
- **Maximum tokens**: 3,285
- **Median (50th percentile)**: 274 tokens
### Decile Distribution
| Percentile | Token Count |
|------------|-------------|
| 10%        | 192         |
| 20%        | 209         |
| 30%        | 228         |
| 40%        | 249         |
| 50%        | 274         |
| 60%        | 302         |
| 70%        | 337         |
| 80%        | 386         |
| 90%        | 467         |
| 100%       | 3,285       |
### Interpretation
1. **Range**: The dataset contains prompts ranging from 164 to 3,285 tokens.
2. **Central Tendency**: The median token count is 274, meaning half of the prompts have 274 tokens or fewer.
3. **Distribution**:
- 90% of prompts have 467 tokens or fewer.
- There's a notable jump from the 90th percentile (467 tokens) to the maximum (3,285 tokens), suggesting some outliers with very high token counts.
4. **Implications for Training**:
   - A maximum sequence length of 400-500 tokens covers roughly 80-90% of prompts (the 80th and 90th percentiles are 386 and 467 tokens).
   - Outliers with very high token counts may need special handling, e.g., truncation or splitting (see the sketch below).
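One way to handle those outliers (a sketch under the same tokenizer assumption as above, not the preprocessing used for this dataset) is to truncate at tokenization time:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

text = "..."  # a fully formatted Alpaca prompt, as in the template above

# Cap sequences at a fixed budget: 512 tokens clears the 90th percentile
# (467) while bounding the rare 3,285-token outliers.
ids = tokenizer(text, truncation=True, max_length=512)["input_ids"]
assert len(ids) <= 512
```

Splitting long recipes instead of truncating would preserve their content, at the cost of extra bookkeeping.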
## Acknowledgment and Citation
```bibtex
@inproceedings{bien-etal-2020-recipenlg,
title = "{R}ecipe{NLG}: A Cooking Recipes Dataset for Semi-Structured Text Generation",
author = "Bie{\'n}, Micha{\l} and
Gilski, Micha{\l} and
Maciejewska, Martyna and
Taisner, Wojciech and
Wisniewski, Dawid and
Lawrynowicz, Agnieszka",
booktitle = "Proceedings of the 13th International Conference on Natural Language Generation",
month = dec,
year = "2020",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.inlg-1.4",
pages = "22--28",
}
``` |