# Qwen3-Inspired Pre-training Dataset

## Overview
This dataset is a curated mixture of high-quality text data designed for large language model pre-training, inspired by the Qwen3 methodology.
## Dataset Statistics

**Total size:** 10.19 billion tokens

### Data Sources
| Source | Tokens | Share | Documents |
|---|---|---|---|
| dclm_baseline | 6.21B | 60.92% | 4,973,695 |
| mini_pile | 1.43B | 14.04% | 999,249 |
| common_corpus | 1.01B | 9.87% | 246,160 |
| the_stack | 0.96B | 9.40% | 248,650 |
| math_pile | 0.59B | 5.77% | 66,729 |
## Data Processing Pipeline
- Data Collection: Sourced from multiple high-quality datasets
- Standardization: All data transformed to a consistent format with `text`, `info`, and `source_data` fields
- Exact Deduplication: Removed identical documents
- Near Deduplication: Used MinHashLSH with a Jaccard similarity threshold of 0.85 (see the sketch after this list)
- Quality Filtering: Applied content-based filtering during processing
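
The two deduplication steps can be approximated as in the minimal sketch below. It assumes the `datasketch` library; the SHA-256 exact-match key, the word 5-gram shingling, and `num_perm=128` are illustrative choices rather than the exact settings used to build this dataset — only the 0.85 Jaccard threshold comes from the pipeline description above.

```python
import hashlib
from datasketch import MinHash, MinHashLSH

def exact_key(text: str) -> str:
    """Hash of the full document text, used for exact deduplication."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def make_minhash(text: str, num_perm: int = 128) -> MinHash:
    """MinHash over word 5-gram shingles (an illustrative shingling choice)."""
    m = MinHash(num_perm=num_perm)
    words = text.split()
    for i in range(max(len(words) - 4, 1)):
        m.update(" ".join(words[i:i + 5]).encode("utf-8"))
    return m

def deduplicate(docs):
    """Drop exact duplicates, then near-duplicates at Jaccard similarity >= 0.85."""
    seen = set()
    lsh = MinHashLSH(threshold=0.85, num_perm=128)
    kept = []
    for idx, doc in enumerate(docs):
        key = exact_key(doc["text"])
        if key in seen:
            continue                      # exact duplicate
        seen.add(key)
        mh = make_minhash(doc["text"])
        if lsh.query(mh):                 # an earlier document exceeds the 0.85 threshold
            continue                      # near duplicate
        lsh.insert(f"doc-{idx}", mh)
        kept.append(doc)
    return kept
```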
## Data Format
Each example contains:
- `text`: The main text content
- `info`: Metadata from the original dataset (as a string)
- `source_data`: Source dataset identifier
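
For a quick look at a single record, something like the following works; the `train` split name and the assumption that `info` is JSON-encoded are guesses based on the field descriptions above, not guarantees from the dataset itself.

```python
import json
from datasets import load_dataset

dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data", split="train")

example = dataset[0]
print(example["source_data"])       # which source corpus the document came from
print(example["text"][:200])        # start of the document text
meta = json.loads(example["info"])  # assumption: the info string is JSON-encoded
print(meta)
```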
## Tokenization
Token counts were computed using the Llama 3 tokenizer (`meta-llama/Meta-Llama-3-8B`).
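
As a sanity check, per-document token counts can be reproduced roughly as below. This assumes the gated `meta-llama/Meta-Llama-3-8B` tokenizer is accessible via `transformers`; counting without special tokens is an assumption about how the reported totals were computed.

```python
from transformers import AutoTokenizer

# Gated model: requires accepting the license on the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def count_tokens(text: str) -> int:
    # Count content tokens only (no BOS/EOS); an assumption about the reported totals.
    return len(tokenizer.encode(text, add_special_tokens=False))

print(count_tokens("Hello, world!"))
```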
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("bluelightai-dev/qwen_clt_pretrain_data")
```
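
For pre-training-scale data it is often more practical to stream than to download everything up front. The snippet below is a sketch: the `train` split name is assumed, and the `"the_stack"` value for `source_data` is taken from the identifiers listed under Dataset Statistics.

```python
from datasets import load_dataset

streamed = load_dataset(
    "bluelightai-dev/qwen_clt_pretrain_data",
    split="train",      # assumption: a single train split
    streaming=True,     # iterate without downloading the full dataset
)

# Keep only code documents, identified by their source_data tag.
code_only = (ex for ex in streamed if ex["source_data"] == "the_stack")
print(next(code_only)["text"][:200])
```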
## Dataset Sources
The dataset combines data from the following sources:
- DCLM Baseline: High-quality web text from DataComp-LM
- Common Corpus: Multilingual web text corpus
- The Stack: Deduplicated source code
- Mini Pile: Academic and reference texts
- Math Pile: Mathematical content and reasoning datasets
## License
Please refer to the individual source dataset licenses. This mixture is provided for research purposes.
## Citation
If you use this dataset, please cite the original source datasets and this work.