---
license: mit
task_categories:
  - fill-mask
tags:
  - pretraining
  - encoder
  - multilingual
---

# mmBERT Mid-training Data


Phase 2 of 3: High-quality mid-training data mixture (600B tokens) with context extension to 8192 tokens.

This dataset contains the mid-training phase data used to train all mmBERT encoder models. This phase focuses on higher quality data sources and extends the context length from 1024 to 8192 tokens. The data is provided in MDS format ready for use with Composer and the ModernBERT training repository.

## 📊 Data Composition

| Data Source | Tokens (B) | Percentage | Description |
|---|---|---|---|
| FineWeb2 | 506.7 | 84.3% | High-quality multilingual web crawl data |
| DCLM (Dolmino) | 40.0 | 6.7% | Filtered high-quality English web data |
| Starcoder | 17.2 | 2.9% | Code repositories and files |
| Arxiv | 5.4 | 0.9% | Academic preprints |
| Dolmino Math | 4.3 | 0.7% | Mathematical content |
| Books | 3.9 | 0.7% | Literature and reference books |
| PeS2o | 3.2 | 0.5% | Scientific papers |
| Tulu Flan | 3.1 | 0.5% | Instruction-following data |
| StackExchange | 3.0 | 0.5% | Q&A forums |
| StackExchange (Dolmino) | 2.8 | 0.5% | Curated Q&A content |
| Wikipedia (MegaWika) | 1.2 | 0.2% | Encyclopedia articles |
| **Total** | **600.8** | **100.0%** | High-quality data for context extension |

## 🌍 Language Coverage

This phase covers 110 languages plus code, sampled with inverse temperature sampling at τ=0.5 (see the sketch after this list). Changes from the initial 60-language pre-training mixture include:

- **Additional mid-resource languages**: Uzbek, Bosnian, Catalan, Albanian, and 46 others
- **Enhanced quality**: Uses filtered FineWeb2-HQ and higher-quality DCLM
- **Longer contexts**: Optimized for 8192-token sequences
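
Inverse temperature sampling flattens the raw language distribution: each language's sampling weight is its token count raised to the power τ, normalized over all languages, which upweights lower-resource languages relative to proportional sampling. A minimal sketch (the token counts below are invented for illustration, not the real mixture statistics):

```python
# Inverse temperature sampling at tau = 0.5.
# Per-language token counts (in billions) are invented for illustration.
counts_b = {"en": 200.0, "de": 40.0, "uz": 0.5, "bs": 0.2}

tau = 0.5
weights = {lang: n ** tau for lang, n in counts_b.items()}
z = sum(weights.values())
probs = {lang: w / z for lang, w in weights.items()}

total = sum(counts_b.values())
for lang in counts_b:
    print(f"{lang}: raw share {counts_b[lang] / total:.3f} -> sampled share {probs[lang]:.3f}")
# Low-resource languages (uz, bs) receive a larger share than their raw
# proportion, while English is downweighted.
```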

βš™οΈ Key Features

- **Context Extension**: RoPE base frequency adjusted to 160k for 8192-token support
- **Quality Upgrade**: Switches to filtered, higher-quality versions of datasets
- **Reduced Masking**: Mask rate lowered to 15% (from 30% in pre-training); see the sketch after this list
- **Language Expansion**: Adds 50 new languages while maintaining data quality
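
As a rough sketch of what the reduced mask rate looks like in practice, the standard Hugging Face `transformers` MLM collator can be configured at 15%. The tokenizer name below is an illustrative placeholder, not the exact mmBERT training setup (the real pipeline lives in the ModernBERT repository linked under Usage):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Illustrative tokenizer choice; substitute your own.
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")

collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,  # mid-training mask rate, down from 30% in pre-training
)

# This phase supports sequences up to 8192 tokens.
encoded = tokenizer("Example document text.", truncation=True, max_length=8192)
batch = collator([encoded])
print(batch["input_ids"].shape, batch["labels"].shape)
```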

## 🚀 Usage

For mid-training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT

### Direct Access

```python
from streaming import StreamingDataset

# Load the streaming dataset (MDS shards)
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/mmbert-midtraining',
    local='/tmp/mmbert-midtraining-data',
    shuffle=True,
)

# Access samples
for sample in dataset:
    text = sample['text']
    # Process your data...
```
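
Since `StreamingDataset` is a PyTorch-compatible `IterableDataset`, it can also be fed to a standard `DataLoader` for batched iteration; the batch size and worker count below are arbitrary examples:

```python
from torch.utils.data import DataLoader

# Batched iteration over the streaming dataset.
loader = DataLoader(dataset, batch_size=32, num_workers=4)
for batch in loader:
    texts = batch['text']  # list of raw text strings
    break
```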

## 🔗 Related Resources

- Paper: https://arxiv.org/abs/2509.06888
- Training code (ModernBERT repository): https://github.com/AnswerDotAI/ModernBERT

## Citation

```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
      title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
      author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2509.06888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.06888},
}
```