---
dataset_info:
  features:
  - name: params
    dtype: string
  - name: data
    dtype: string
  - name: task
    dtype: string
  - name: step
    dtype: int64
  - name: seed
    dtype: string
  - name: chinchilla
    dtype: string
  - name: tokens
    dtype: int64
  - name: compute
    dtype: float64
  - name: metrics
    dtype: string
  splits:
  - name: train
    num_bytes: 1848365910
    num_examples: 1410750
  download_size: 693325464
  dataset_size: 1848365910
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
---
More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release DataDecide: a suite of models pretrained on 25 corpora with differing sources, deduplication, and filtering, for up to 100B tokens and across 14 model sizes ranging from 4M to 1B parameters (more than 30k model checkpoints in total).
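The results can be loaded with the 🤗 `datasets` library. The snippet below is a minimal, illustrative sketch: `DATASET_REPO_ID` is a placeholder for this card's repository id, and the column names simply mirror the schema in the metadata above.

```python
# Minimal sketch: load the DataDecide evaluation results and inspect one row.
# "DATASET_REPO_ID" is a placeholder -- substitute this card's repository id.
from datasets import load_dataset

ds = load_dataset("DATASET_REPO_ID", split="train")

# Columns follow the schema above:
# params, data, task, step, seed, chinchilla, tokens, compute, metrics.
print(ds.column_names)
print(ds[0])
```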
## Evaluation
We evaluate all checkpoints with the OLMES suite of 10 multiple-choice question answering benchmarks (Gu et al., 2024):
- MMLU (Hendrycks et al., 2021)
- HellaSwag (Zellers et al., 2019)
- ARC-Challenge (Clark et al., 2018)
- ARC-Easy (Clark et al., 2018)
- PIQA (Bisk et al., 2020)
- CommonsenseQA (Talmor et al., 2019)
- Social IQa (Sap et al., 2019)
- OpenBookQA (Mihaylov et al., 2018)
- BoolQ (Clark et al., 2019)
- Winogrande (Sakaguchi et al., 2020)
We also release instance-level evaluation results: https://huggingface.co/datasets/allenai/DataDecide-eval-instances
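As a sketch of how these per-benchmark results might be sliced, the snippet below filters the rows for a single task and data recipe. The exact strings stored in the `task` and `data` columns (and the serialization of the `metrics` string) are assumptions here, so the unique values are listed first.

```python
# Sketch: pull the results for one benchmark and one data recipe.
# The exact values stored in `task` and `data` (e.g., "mmlu", "Dolma1.7")
# are assumptions -- check the unique values printed below first.
from datasets import load_dataset

ds = load_dataset("DATASET_REPO_ID", split="train")  # placeholder repo id

print(sorted(ds.unique("task")))  # the 10 OLMES tasks, as stored
print(sorted(ds.unique("data")))  # the 25 data recipes, as stored

subset = ds.filter(lambda row: row["task"] == "mmlu" and row["data"] == "Dolma1.7")
for row in subset.select(range(min(5, len(subset)))):
    # `metrics` is stored as a string; its serialization is not parsed here.
    print(row["params"], row["step"], row["seed"], row["metrics"])
```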
## 350 Models over Differences in Data and Scale
These evaluations cover all DataDecide models. For each of our 25 datasets and 14 model sizes, we train a model (linked below). Each model is trained with 3 random seeds and has intermediate checkpoints (uploaded after the initial release). All models finish training at a token-to-parameter ratio of 100 (e.g., 1B parameters -> 100B tokens).
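A small sketch of that 100x token-to-parameter budget; the helper name is illustrative (not part of any DataDecide tooling), and the 4M and 1B endpoints are the smallest and largest model sizes named above.

```python
# Sketch of the 100x token-to-parameter budget at which training ends.
# The function name is illustrative only.
TOKEN_TO_PARAM_RATIO = 100

def target_tokens(n_params: int, ratio: int = TOKEN_TO_PARAM_RATIO) -> int:
    """Total training tokens for a model with `n_params` parameters."""
    return ratio * n_params

print(f"{target_tokens(1_000_000_000):,}")  # 1B params -> 100,000,000,000 tokens
print(f"{target_tokens(4_000_000):,}")      # 4M params ->     400,000,000 tokens
```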
## Data
| Source / Recipe | Description |
|---|---|
| Dolma1.7: Original, No code, No math/code, No Reddit, No Flan | A 2.3T-token corpus (Dolma 1.7; Soldaini et al., 2024) sampling common LM sources for open research. We ablate removing the code, math/code, Reddit, or Flan subsets. |
| Dolma1.6++: Original | Dolma 1.6 plus additional sources from Dolma 1.7: RedPajama's arxiv subset, openwebmath, algebraic stack, flan, starcoder, falcon. |
| C4: Original | The C4 dataset (Raffel et al., 2019) as prepared in Dolma 1.7, heuristically filtered from the April 2019 Common Crawl. |
| FineWeb-Pro: Original | The FineWeb-Pro corpus (Zhou et al., 2024), featuring model-driven data cleaning on FineWeb. |
| FineWeb-Edu: Original | The deduplicated FineWeb-Edu subset of the SmolLM-Corpus (Ben Allal et al., 2024), focused on educational web pages. |
| Falcon: Original | The Falcon RefinedWeb corpus (Penedo et al., 2023) as in Dolma 1.7, derived from Common Crawl through June 2023 and more aggressively filtered/deduplicated than C4. |
| Falcon+CC: Original, QC 10%, QC 20%, QC Orig 10%, QC Tulu 10% | Falcon and Dolma 1.7's Common Crawl. We quality filter to the top 10% or 20% of documents with a reproduced or the original Li et al. (2024) filter, or with the filter retrained on a pre-release version of Tulu-v3 (Lambert et al., 2024). |
| DCLM-Baseline: Original, QC 7% FW2, QC 7% FW3, QC FW 10%, QC 10%, QC 20% | A SOTA Common Crawl corpus using the best ablated deduplication, cleaning heuristics, and quality filter. We quality filter to the top 7% of DCLM-classified documents and further keep those with 2+ or 3+ scores from the FineWeb-Edu classifier; or filter to the top 3% or 10% with the FineWeb-Edu classifier; or take the top 10% or 20% with a reproduced DCLM classifier. |
| λ% DCLM-Baseline + (1 – λ)% Dolma1.7 | Fractional combinations of Dolma1.7 and DCLM-Baseline, mixing the two datasets in different proportions for λ ∈ {25%, 50%, 75%}. |
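To compare these recipes head to head, one approach is to keep only the final checkpoint of each run. The snippet below is a sketch of that idea with pandas; the repository id is a placeholder, and parsing the `metrics` string into numeric scores is left out because its exact serialization is not specified on this card.

```python
# Sketch: keep only the final checkpoint (largest step) for each
# data recipe, model size, task, and seed, to compare recipes head to head.
from datasets import load_dataset

df = load_dataset("DATASET_REPO_ID", split="train").to_pandas()  # placeholder id

final = (
    df.sort_values("step")
      .groupby(["data", "params", "task", "seed"])
      .tail(1)
)
# `metrics` is left as the raw string; parse it as needed for your analysis.
print(final[["data", "params", "task", "seed", "step", "metrics"]].head())
```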
## Dataset Description
- Developed by: Allen Institute for AI (Ai2)
- Language(s) (NLP): English
- License: This dataset is licensed under ODC-BY and intended for research and educational use in accordance with Ai2's Responsible Use Guidelines.
- Contact: Technical inquiries: [email protected]; press: [email protected]
## Links
- Repository: https://github.com/allenai/DataDecide
- Paper: https://allenai.org/papers/datadecide
## Citation
BibTeX:
    @article{MagnussonDataDecide2025,
      title={{DataDecide: How to Predict Best Pretraining Data with Small Experiments}},
      author={Ian Magnusson and Nguyen Tai and Ben Bogin and David Heineman and Jena Hwang and Luca Soldaini and Akshita Bhagia and Jiacheng Liu and Dirk Groeneveld and Oyvind Tafjord and Noah A. Smith and Pang Wei Koh and Jesse Dodge},
      year={2025},
      journal={arXiv preprint},
    }