---
configs:
- config_name: default
  data_files:
  - split: train
    path: "*.parquet"
license: odc-by
---

# Gemstones Training Dataset - Sequential version

This data is a reprocessed version of the first 1B rows of the Dolma v1.7 dataset (https://huggingface.co/datasets/allenai/dolma). The data is encoded using the Pythia tokenizer: https://huggingface.co/EleutherAI/pythia-160m

**Disclaimer:** this is an approximation of the dataset used to train the Gemstones model suite. Due to the randomized and sharded nature of the distributed training code, the only way to perfectly reproduce the training batches across the GPUs is to run the training code. This repo is the result of an attempt to simulate the way in which the training code loaded the data, and to stream it out to a portable file format for use in downstream analyses of the model suite.

# Loading

This data should be loadable using `load_dataset` in the standard manner to auto-download the data. Alternately, the dataset can be cloned using git to materialize the files locally, and then loaded using the default `parquet` builder as described here: https://huggingface.co/docs/datasets/en/loading#parquet

# Sharding format: sequential

This version of the dataset approximates the order of the dataset _as if_ a model were being trained on a single GPU without data parallelism. In reality, specific subsets of the data were loaded by the distributed workers (GPUs) and passed through a local copy of the model during data-parallel training.

Since the Gemstones suite of models was trained on a variety of topologies (the 50M models were trained on 8 nodes while the 2B models used 64 nodes), the distributed reading format was chosen such that different topologies would read the data in similar orders. Specifically, a round-robin reading order ensured that, while each worker in an 8-node job would be responsible for more data than each worker in a larger 64-node job, the first files read by the smaller configuration would be the same as the first files read by the workers in the larger configuration. E.g., if workers `1` and `2` in a 2-worker job got files `[A,B]` and `[C,D]`, then workers `1`, `2`, `3`, and `4` in a larger 4-worker job would receive files `[A]`, `[B]`, `[C]`, `[D]` respectively. This way, periodically, all models would be guaranteed to have seen all of the same rows of the dataset during training. The sync granularity is determined by the largest configuration: 64 nodes = 512 GPUs, each loading 4 raw files at a time with each file containing 2048 x 2049 = ~4M tokens, which means synchronization every 512 x 4 x 2048 x 2049 = ~8.6B tokens.

This linearized recreation assumes a single worker is reading every row of the dataset, so at a microbatch size of 8 over packed sequences of 2048 tokens, 21247488 steps worth of "training" is required to reach ~350B tokens. In this setup, a single worker received the total dataset represented by the thousands of raw training-format files (for reference, this format is defined by the `packed_cycle_dataset.py` file in this repo). The raw files were first shuffled globally, and then the single worker loaded 4 files at a time and shuffled the "blocks" of 2048 tokens each in a temporary buffer, so that the contents of the 4 packed files were not read in the exact order in which the tokens appeared in them.
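To make the synchronization argument concrete, here is a minimal sketch of the file-assignment property and the token arithmetic above. The function name, the toy file list, and the per-worker slicing are illustrative assumptions that simply reproduce the worked example; the real assignment logic lives in the distributed training code.

```python
# Illustrative sketch only: names and constants are assumptions, not the
# actual training code (the raw file format is defined by packed_cycle_dataset.py).

def assign_files(files, num_workers):
    """Split a globally shuffled file list into contiguous per-worker slices,
    matching the worked example above (2 workers -> [A,B],[C,D];
    4 workers -> [A],[B],[C],[D])."""
    per_worker = len(files) // num_workers
    return [files[w * per_worker:(w + 1) * per_worker] for w in range(num_workers)]

files = ["A", "B", "C", "D"]
print(assign_files(files, 2))  # [['A', 'B'], ['C', 'D']]
print(assign_files(files, 4))  # [['A'], ['B'], ['C'], ['D']]

# At each synchronization point, every topology has seen the same set of files.
assert set(sum(assign_files(files, 2), [])) == set(sum(assign_files(files, 4), []))

# Sync granularity of the largest configuration: 64 nodes = 512 GPUs, each
# loading 4 raw files at a time, each file packing 2048 x 2049 = ~4.2M tokens.
workers = 64 * 8                              # 512 GPUs
tokens_per_file = 2048 * 2049
print(f"{workers * 4 * tokens_per_file:,}")   # 8,594,128,896 (~8.6B tokens)

# Linearized single-worker recreation: 21247488 steps at a microbatch of 8
# packed sequences of 2048 tokens lands at roughly the ~350B-token budget.
steps, microbatch, seq_len = 21247488, 8, 2048
print(f"{steps * microbatch * seq_len:,}")    # ~3.48e11 tokens
```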
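The linearized single-worker recreation described above can also be sketched in a few lines. This is a hypothetical illustration only: the function and variable names are made up, and the actual shuffling behavior is defined by `train_mock_data_order_file.py` and `packed_cycle_dataset.py`.

```python
import random

# Hypothetical sketch: after a global shuffle of the raw packed files, load
# 4 files at a time, shuffle their token blocks in a temporary buffer, and
# emit the blocks in that order.
FILES_PER_LOAD = 4

def linearized_block_order(files_to_blocks, seed=0):
    """files_to_blocks maps a packed-file name to its list of token blocks."""
    rng = random.Random(seed)
    file_names = sorted(files_to_blocks)
    rng.shuffle(file_names)                                  # global shuffle of raw files
    for start in range(0, len(file_names), FILES_PER_LOAD):
        buffer = []
        for name in file_names[start:start + FILES_PER_LOAD]:  # load 4 files at a time
            buffer.extend(files_to_blocks[name])
        rng.shuffle(buffer)                                  # shuffle blocks within the window
        yield from buffer

# Toy example: 8 files of 3 blocks each (real files pack ~4M tokens apiece,
# read as blocks of 2048 tokens).
toy = {f"file_{i}": [f"file_{i}/block_{j}" for j in range(3)] for i in range(8)}
print(list(linearized_block_order(toy))[:6])
```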
**Note**: because a single worker receives all files in this version, the sets of 4 files that are loaded at a time, and whose contents (blocks of tokens) are read in a shuffled order, do not exactly match those seen by any one of the Gemstones models. However, the key point is that the synchronization argument above still holds, so analyses at a granularity coarser than ~8.6B tokens should be sound.

The `train_mock_data_order_file.py` script performs these operations and writes the resulting data order out to files. Each shard, named like `ordered_dataset_shard_{shard}-of-{total_shards}.parquet` (the total number of shards is arbitrary, but chosen to be 256 for portability), represents a contiguous subset of the approximated total ordering of the rows in the training dataset.
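As a reference for the loading instructions above, the sketch below shows two ways of reading the shards back in shard order. The repository id is a placeholder, and the glob/regex simply assumes the shard naming scheme just described; adjust both to match the actual files.

```python
import glob
import re

from datasets import load_dataset

# Option 1: auto-download from the Hub. REPO_ID is a placeholder for this
# dataset's actual Hugging Face path.
REPO_ID = "org-name/gemstones-dataset-sequential"
ds = load_dataset(REPO_ID, split="train", streaming=True)

# Option 2: after cloning the repo with git, point the default parquet builder
# at the local shard files, sorted numerically by shard index so that iteration
# follows the approximated training order.
def shard_index(path):
    return int(re.search(r"shard_(\d+)-of-", path).group(1))

shards = sorted(glob.glob("ordered_dataset_shard_*-of-*.parquet"), key=shard_index)
ds_local = load_dataset("parquet", data_files={"train": shards}, split="train")

# Rows arrive in (approximately) the order a single hypothetical worker would
# have seen them during training.
for i, row in enumerate(ds):
    print(i, list(row.keys()))
    if i >= 2:
        break
```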