This repository contains the DCLM-10B-Qwen2-binidx dataset, a large-scale text corpus pre-tokenized with the Qwen2 tokenizer and stored in binary indexed (.bin/.idx) format. It serves as training data for RADLADS (Rapid Attention Distillation to Linear Attention Decoders at Scale).
RADLADS is a method for rapidly converting traditional softmax attention transformers into efficient linear attention decoder models. This corpus provides the distillation training data, enabling the conversion of large language models such as Qwen2.5 into linear attention variants with minimal training tokens while maintaining high inference quality.
For more details on the RADLADS project, including the training code and converted models, please refer to the official GitHub repository: https://github.com/recursal/RADLADS
Sample Usage
You can download the dclm-10B.idx and dclm-10B.bin dataset files using wget as follows:
mkdir -p data
wget --continue -O data/dclm-10B.idx https://huggingface.co/datasets/recursal/DCLM-10B-Qwen2-binidx/resolve/main/dclm-10B.idx?download=true
wget --continue -O data/dclm-10B.bin https://huggingface.co/datasets/recursal/DCLM-10B-Qwen2-binidx/resolve/main/dclm-10B.bin?download=true
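After downloading, you can sanity-check the files locally. The sketch below is a minimal example assuming the standard Megatron-LM/RWKV "binidx" layout (an MMIDIDX magic header in the .idx file, followed by per-sequence token counts and byte offsets into the .bin file); the loader actually used for training is provided in the RADLADS repository linked above.

```python
# Minimal sketch: inspect the binidx files, assuming the Megatron-LM/RWKV layout.
import struct
import numpy as np

# Dtype codes used by that index format (assumption based on the common binidx spec).
DTYPES = {1: np.uint8, 2: np.int8, 3: np.int16, 4: np.int32,
          5: np.int64, 6: np.float32, 7: np.float64, 8: np.uint16}

def read_index(idx_path):
    """Parse the .idx header: magic, version, dtype code, then sizes and byte offsets."""
    with open(idx_path, "rb") as f:
        assert f.read(9) == b"MMIDIDX\x00\x00", "not a binidx index file"
        version, = struct.unpack("<Q", f.read(8))
        dtype_code, = struct.unpack("<B", f.read(1))
        n_seqs, = struct.unpack("<Q", f.read(8))
        n_docs, = struct.unpack("<Q", f.read(8))      # document boundaries (unused here)
        sizes = np.fromfile(f, dtype=np.int32, count=n_seqs)     # tokens per sequence
        pointers = np.fromfile(f, dtype=np.int64, count=n_seqs)  # byte offsets into the .bin
    return DTYPES[dtype_code], sizes, pointers

dtype, sizes, pointers = read_index("data/dclm-10B.idx")
print(f"{len(sizes)} sequences, {sizes.astype(np.int64).sum()} tokens, dtype={np.dtype(dtype)}")

# Memory-map the first sequence's token ids out of the .bin file.
tokens = np.memmap("data/dclm-10B.bin", dtype=dtype, mode="r",
                   offset=int(pointers[0]), shape=(int(sizes[0]),))
print(tokens[:20])
```

The printed values are token ids rather than raw text; per the dataset name, they were produced with the Qwen2 tokenizer and can be decoded with the matching Hugging Face tokenizer.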
Citation
If you use this dataset or find the RADLADS work valuable, please consider citing the associated paper:
@misc{goldstein2025radladsrapidattentiondistillation,
  title={RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale},
  author={Daniel Goldstein and Eric Alcaide and Janna Lu and Eugene Cheah},
  year={2025},
  eprint={2505.03005},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.03005},
}