dataset_info:
  features:
    - name: params
      dtype: string
    - name: data
      dtype: string
    - name: task
      dtype: string
    - name: step
      dtype: int64
    - name: seed
      dtype: string
    - name: chinchilla
      dtype: string
    - name: tokens
      dtype: int64
    - name: compute
      dtype: float64
    - name: metrics
      dtype: string
  splits:
    - name: train
      num_bytes: 1848365910
      num_examples: 1410750
  download_size: 693325464
  dataset_size: 1848365910
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: odc-by
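
The metadata above stores `metrics` as a string alongside typed scalar columns. A minimal sketch of working with a row of this schema, using a synthetic record so it runs offline; the field values and the assumption that `metrics` is JSON-encoded are illustrative, and the repo id in the comment is an assumption (only the instance-level repo is named on this card):

```python
import json

# To pull the real data (repo id is an assumption, not confirmed by this card):
# from datasets import load_dataset
# ds = load_dataset("allenai/DataDecide-eval-results", split="train")

# Hypothetical example row matching the dataset_info schema above.
# Field names come from the metadata; the values are illustrative only.
row = {
    "params": "1B",
    "data": "Dolma1.7",
    "task": "arc_challenge",
    "step": 67500,
    "seed": "default",
    "chinchilla": "5xC",
    "tokens": 100_000_000_000,
    "compute": 5.4e20,
    # `metrics` is a string column; here we assume it holds JSON.
    "metrics": '{"acc_raw": 0.41, "acc_per_char": 0.45}',
}

metrics = json.loads(row["metrics"])
print(row["data"], row["params"], metrics["acc_raw"])
```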


More than one training run goes into making a large language model, but developers rarely release the small models and datasets they experiment with during the development process. How do they decide what dataset to use for pretraining or which benchmarks to hill climb on? To empower open exploration of these questions, we release DataDecide—a suite of models we pretrain on 25 corpora with differing sources, deduplication, and filtering, up to 100B tokens and across 14 model sizes from 4M to 1B parameters (more than 30k model checkpoints in total).
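
The scale of the suite follows from the grid itself: 25 corpora crossed with 14 model sizes and 3 random seeds gives 1,050 training runs, and the intermediate checkpoints along each run push the total past 30k. A back-of-envelope check (the per-run checkpoint count is inferred, not a released number):

```python
corpora, sizes, seeds = 25, 14, 3

final_models = corpora * sizes   # 350 dataset/size combinations
runs = final_models * seeds      # 1,050 training runs in total
print(runs)                      # -> 1050

# "More than 30k checkpoints" implies roughly 29+ intermediate
# checkpoints per run on average (an assumption, not a stated figure).
min_avg_checkpoints = 30_000 / runs
print(round(min_avg_checkpoints, 1))  # -> 28.6
```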

Evaluation

We evaluate all checkpoints over the OLMES suite of 10 multiple-choice question answering benchmarks (Gu et al., 2024).

We also release instance-level evaluation results: https://huggingface.co/datasets/allenai/DataDecide-eval-instances

350 Models over Differences in Data and Scale

These evaluations cover all DataDecide models. For each of our 25 datasets and 14 model sizes, we train a model, linked below. Each run has intermediate checkpoints (uploading after initial release) and is repeated over 3 random seeds. All models finish training at a token-to-parameter ratio of 100 (e.g., 1B parameters -> 100B tokens).
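
The fixed ratio makes each run's token budget a one-line computation. A sketch, treating the size labels below as nominal names rather than exact parameter counts:

```python
def token_budget(params: int, ratio: int = 100) -> int:
    """Training tokens for a run at a fixed token-to-parameter ratio."""
    return params * ratio

# Nominal sizes: the labels (4M, 150M, 1B, ...) are rounded names.
for label, params in [("4M", 4_000_000), ("150M", 150_000_000), ("1B", 1_000_000_000)]:
    print(label, token_budget(params))
```

At 1B parameters this gives 100B training tokens, matching the example above.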

Dolma1.7 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Dolma1.7 (no code) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Dolma1.7 (no math, code) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Dolma1.7 (no Reddit) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Dolma1.7 (no Flan) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Dolma1.6++ 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
C4 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
FineWeb-Pro 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
FineWeb-Edu 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Falcon 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Falcon+CC 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Falcon+CC (QC 10%) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Falcon+CC (QC 20%) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Falcon+CC (QC Orig 10%) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
Falcon+CC (QC Tulu 10%) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline (QC 7%, FW2) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline (QC 7%, FW3) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline (QC FW 3%) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline (QC FW 10%) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline (QC 10%) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline (QC 20%) 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline 25% / Dolma 75% 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline 50% / Dolma 50% 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B
DCLM-Baseline 75% / Dolma 25% 4M 6M 8M 10M 14M 16M 20M 60M 90M 150M 300M 530M 750M 1B

Data

| Source | Recipe | Description |
| --- | --- | --- |
| Dolma1.7 | Original, No code, No math/code, No Reddit, No Flan | A 2.3T-token corpus (Dolma 1.7; Soldaini et al., 2024) sampling common LM sources for open research. We ablate code, math/code, Reddit, or Flan subsets. |
| Dolma1.6++ | Original | Dolma 1.6 plus additional sources from Dolma 1.7: RedPajama's arXiv subset, OpenWebMath, Algebraic Stack, Flan, StarCoder, Falcon. |
| C4 | Original | The C4 dataset (Raffel et al., 2019) as prepared in Dolma 1.7, heuristically filtered from the April 2019 Common Crawl. |
| FineWeb-Pro | Original | The FineWeb-Pro corpus (Zhou et al., 2024), featuring model-driven data cleaning on FineWeb. |
| FineWeb-Edu | Original | The deduplicated FineWeb-Edu subset of the SmolLM-Corpus (Ben Allal et al., 2024), focused on educational web pages. |
| Falcon | Original | The Falcon RefinedWeb corpus (Penedo et al., 2023) in Dolma 1.7, derived from Common Crawl through June 2023 and more aggressively filtered/deduplicated than C4. |
| Falcon+CC | Original, QC 10%, QC 20%, QC Orig 10%, QC Tulu 10% | Falcon and Dolma 1.7's Common Crawl. We quality filter to the top 10% or 20% of documents with a reproduced or the original Li et al. (2024) filter, or retrain the filter on a pre-release version of Tulu-v3 (Lambert et al., 2024). |
| DCLM-Baseline | Original, QC 7% FW2, QC 7% FW3, QC FW 3%, QC FW 10%, QC 10%, QC 20% | A SOTA Common Crawl corpus using the best ablated deduplication, cleaning heuristics, and quality filter. We quality filter to the top 7% of DCLM-classified documents and further take 2+ or 3+ scores with the FineWeb-Edu classifier; or filter to the top 3% or 10% with the FineWeb-Edu classifier; or take the top 10% or 20% with a reproduced DCLM classifier. |
| λ% DCLM-Baseline + (1 − λ)% Dolma1.7 | λ ∈ {25%, 50%, 75%} | Fractional combinations of Dolma1.7 and DCLM-Baseline, mixing different proportions of the two datasets for λ ∈ {25%, 50%, 75%}. |
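
The λ mixtures in the last row can be read as token-level proportions. A sketch of how a fixed token budget might split between the two sources (the helper name and the 100B budget are illustrative, not the released mixing procedure):

```python
def mixture_split(total_tokens: int, lam: float) -> tuple[int, int]:
    """Split a token budget: a fraction `lam` from DCLM-Baseline,
    the remaining (1 - lam) from Dolma1.7."""
    dclm = int(total_tokens * lam)
    return dclm, total_tokens - dclm

for lam in (0.25, 0.50, 0.75):
    dclm, dolma = mixture_split(100_000_000_000, lam)
    print(f"λ={lam}: DCLM={dclm}, Dolma={dolma}")
```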


Citation

BibTeX:

@article{MagnussonDataDecide2025,
      title={{DataDecide: How to Predict Best Pretraining Data with Small Experiments}},
      author={Ian Magnusson and Nguyen Tai and Ben Bogin and David Heineman and Jena Hwang and Luca Soldaini and Akshita Bhagia and Jiacheng Liu and Dirk Groeneveld and Oyvind Tafjord and Noah A. Smith and Pang Wei Koh and Jesse Dodge},
      year={2025},
      journal={arXiv preprint},
}