---
license: mit
task_categories:
  - fill-mask
tags:
  - pretraining
  - encoder
  - multilingual
---

# mmBERT Training Data (Ready-to-Use)


**Complete Training Dataset:** Pre-randomized and ready-to-use multilingual training data (3T tokens) for encoder model pre-training.

This dataset is part of the complete, pre-shuffled training data used to train the mmBERT encoder models. Unlike the individual phase datasets, this version is ready for immediate use, but the data mixture cannot easily be modified. The data is provided as decompressed MDS shards, ready for use with MosaicML's Composer and the ModernBERT training repository.
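Since the shards are stored in MDS format, they can be read directly with MosaicML's `streaming` library before being handed to a Composer or ModernBERT training loop. The sketch below is a minimal example, not part of this card: the local directory name is a placeholder, and the column names (e.g. a `text` field) should be checked against the shards' `index.json`.

```python
# pip install mosaicml-streaming
from streaming import StreamingDataset
from torch.utils.data import DataLoader

# Point StreamingDataset at a local copy of the decompressed MDS shards.
# The path is a placeholder; the data is already pre-shuffled, so
# shuffle=False keeps the original training order.
dataset = StreamingDataset(
    local="path/to/mmbert-training-data",
    shuffle=False,
    batch_size=8,
)

# Each sample is a dict keyed by the MDS column names.
print(len(dataset))
print(dataset[0].keys())

# Wrap in a standard PyTorch DataLoader for use in a training loop.
loader = DataLoader(dataset, batch_size=8)
```

Because the data is pre-randomized, iterating the shards in order reproduces the training mixture without any additional shuffling.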

## Licensing & Attribution

This dataset aggregates multiple open-source datasets under permissive licenses. See individual source datasets for specific attribution requirements.

## Related Resources

## Citation

```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
      title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
      author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2509.06888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.06888},
}
```