---
license: cc-by-nc-4.0
---

# recipe-nlg-alpaca

A heavily curated dataset derived from recipe-nlg (source="Gathered" only). Scraping artifacts, typographical errors, stray Unicode, and empty or very short recipes were removed.

The remaining recipes were then formatted into the Alpaca instruction format with Instruction, Input, and Output fields. The total number of recipes went from ~2.2M (original dataset) to ~500K. It is still not perfect (admittedly, the original dataset was quite flawed); fixing it fully would require very time-consuming manual editing, so consider it a work in progress.

If you want to support me, you can here.

## Token Distribution Analysis

We analyzed the token count distribution of the dataset using the Llama 3 tokenizer on the following Alpaca prompt template:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
```
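
As a sketch of how each record could be rendered into this prompt, assuming hypothetical field names `instruction`, `input_text`, and `output` (the concrete recipe below is purely illustrative, not taken from the dataset):

```python
ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

def format_example(instruction: str, input_text: str, output: str) -> str:
    # Fill the three placeholders of the Alpaca template in order.
    return ALPACA_TEMPLATE.format(instruction, input_text, output)

prompt = format_example(
    "Write a recipe for the dish described below.",       # hypothetical instruction
    "Dish: pancakes\nIngredients: flour, milk, eggs",     # hypothetical input
    "1. Whisk the flour, milk and eggs together. ...",    # hypothetical output
)
```

This full rendered string is what was tokenized for the statistics below.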

### Key Statistics

- Minimum tokens: 164
- Maximum tokens: 3,285
- Median (50th percentile): 274 tokens

### Decile Distribution

| Percentile | Token Count |
|------------|-------------|
| 10%        | 192         |
| 20%        | 209         |
| 30%        | 228         |
| 40%        | 249         |
| 50%        | 274         |
| 60%        | 302         |
| 70%        | 337         |
| 80%        | 386         |
| 90%        | 467         |
| 100%       | 3,285       |
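
Decile figures like those above can be derived from a list of per-example token counts with a nearest-rank percentile. A minimal sketch, where the counts are only an illustrative stand-in for the real per-recipe token counts (note that NumPy's `percentile` default uses linear interpolation instead, so its values can differ slightly):

```python
# Hypothetical per-example token counts; in practice these come from running
# the Llama 3 tokenizer over every formatted prompt in the dataset.
token_counts = [164, 192, 209, 228, 249, 274, 302, 337, 386, 467, 3285]

def percentile(values, pct):
    # Nearest-rank percentile: smallest value with at least pct% of the
    # data at or below it.
    ordered = sorted(values)
    rank = max(1, -(-pct * len(ordered) // 100))  # ceil(pct * n / 100)
    return ordered[rank - 1]

print(percentile(token_counts, 50))  # median of the sample counts
print(percentile(token_counts, 90))
```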

### Interpretation

1. Range: The dataset contains prompts ranging from 164 to 3,285 tokens.
2. Central tendency: The median token count is 274, meaning half of the prompts have 274 tokens or fewer.
3. Distribution:
   - 90% of prompts have 467 tokens or fewer.
   - There is a notable jump from the 90th percentile (467 tokens) to the maximum (3,285 tokens), suggesting some outliers with very high token counts.
4. Implications for training:
   - A sequence length of 400-500 tokens would cover the majority of prompts.
   - Outliers with high token counts may need special handling (e.g., truncation or splitting).
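
A minimal sketch of the two handling strategies mentioned above, assuming a hypothetical training sequence length of 512 tokens (not a value prescribed by this dataset):

```python
MAX_LEN = 512  # assumed sequence length; choose per your training setup

def truncate(token_ids, max_len=MAX_LEN):
    # Drop everything past max_len; simple, but loses the end of long recipes.
    return token_ids[:max_len]

def split_into_chunks(token_ids, max_len=MAX_LEN):
    # Split a long example into consecutive chunks of at most max_len tokens,
    # preserving all content at the cost of breaking it across examples.
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), max_len)]

long_example = list(range(3285))  # stand-in for a tokenized outlier prompt
assert len(truncate(long_example)) == 512
assert sum(len(c) for c in split_into_chunks(long_example)) == 3285
```

Given that 90% of prompts fit in 467 tokens, truncation at this length affects only the long tail of outliers.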

## Acknowledgment and citation

```bibtex
@inproceedings{bien-etal-2020-recipenlg,
    title = "{R}ecipe{NLG}: A Cooking Recipes Dataset for Semi-Structured Text Generation",
    author = "Bie{\'n}, Micha{\l}  and
      Gilski, Micha{\l}  and
      Maciejewska, Martyna  and
      Taisner, Wojciech  and
      Wisniewski, Dawid  and
      Lawrynowicz, Agnieszka",
    booktitle = "Proceedings of the 13th International Conference on Natural Language Generation",
    month = dec,
    year = "2020",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.inlg-1.4",
    pages = "22--28",
}
```