---
license: mit
task_categories:
- fill-mask
tags:
- pretraining
- encoder
- multilingual
---

# mmBERT Mid-training Data

[License: MIT](https://opensource.org/licenses/MIT)
[Paper](https://arxiv.org/abs/2509.06888)
[Models](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4)
[Code](https://github.com/jhu-clsp/mmBERT)

> **Phase 2 of 3**: High-quality mid-training data mixture (600B tokens) with context extension to 8192 tokens.

This dataset contains the mid-training phase data used to train all [mmBERT encoder models](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4). This phase shifts to higher-quality data sources and extends the context length from 1024 to 8192 tokens. The data is provided in **MDS format**, ready for use with [Composer](https://github.com/mosaicml/composer) and the [ModernBERT training repository](https://github.com/answerdotai/ModernBERT).

## 📊 Data Composition

| Data Source | Tokens (B) | Percentage | Description |
|:------------|:-----------|:-----------|:------------|
| FineWeb2 | 506.7 | 84.3% | High-quality multilingual web crawl data |
| DCLM (Dolmino) | 40.0 | 6.7% | Filtered high-quality English web data |
| Starcoder | 17.2 | 2.9% | Code repositories and files |
| Arxiv | 5.4 | 0.9% | Academic preprints |
| Dolmino Math | 4.3 | 0.7% | Mathematical content |
| Books | 3.9 | 0.7% | Literature and reference books |
| PeS2o | 3.2 | 0.5% | Scientific papers |
| Tulu Flan | 3.1 | 0.5% | Instruction-following data |
| StackExchange | 3.0 | 0.5% | Q&A forums |
| StackExchange (Dolmino) | 2.8 | 0.5% | Curated Q&A content |
| Wikipedia (MegaWika) | 1.2 | 0.2% | Encyclopedia articles |
| **Total** | **600.8** | **100.0%** | High-quality data for context extension |

## 🌍 Language Coverage

This phase covers **110 languages** plus code, sampled with inverse temperature sampling at τ=0.5 (see the sketch below). It expands coverage from the initial 60 languages to include:
- **Additional mid-resource languages**: Uzbek, Bosnian, Catalan, Albanian, and 46 others
- **Enhanced quality**: Uses filtered FineWeb2-HQ and higher-quality DCLM
- **Longer contexts**: Optimized for 8192 token sequences

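
For intuition, here is a minimal sketch of temperature sampling at τ=0.5; the language codes and token counts are invented for illustration and are not the actual mmBERT per-language statistics.

```python
# Hypothetical token counts in billions -- illustration only, not mmBERT's real statistics.
token_counts = {"eng": 300.0, "tur": 20.0, "uzb": 1.0}

def temperature_sampling(counts, tau=0.5):
    """Sampling probability proportional to count**tau.

    tau=1.0 reproduces proportional sampling; tau<1 flattens the distribution,
    upweighting lower-resource languages.
    """
    weights = {lang: count ** tau for lang, count in counts.items()}
    total = sum(weights.values())
    return {lang: w / total for lang, w in weights.items()}

print(temperature_sampling(token_counts, tau=0.5))
# ~{'eng': 0.76, 'tur': 0.20, 'uzb': 0.04}, versus ~{0.93, 0.06, 0.003} under proportional sampling
```
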
## ⚙️ Key Features

- **Context Extension**: RoPE base frequency adjusted to 160k for 8192-token support
- **Quality Upgrade**: Switches to filtered, higher-quality versions of datasets
- **Reduced Masking**: Mask rate lowered to 15% (from 30% in pre-training); see the sketch after this list
- **Language Expansion**: Adds 50 new languages while maintaining data quality

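
The lower mask rate is applied inside the ModernBERT/Composer training pipeline; as a rough, stand-alone illustration, the same 15% rate can be expressed with Hugging Face's `DataCollatorForLanguageModeling` (the checkpoint id below is an assumption based on the mmBERT model collection):

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumed checkpoint id from the mmBERT collection -- substitute the model you use.
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/mmBERT-base")

# 15% MLM mask rate used in mid-training (pre-training used 30%).
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)

# `labels` is -100 everywhere except at the masked positions.
batch = collator([tokenizer(t) for t in ["Hello world", "Bonjour le monde"]])
print(batch["input_ids"].shape, batch["labels"].shape)
```
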
## 🚀 Usage

For mid-training, see the ModernBERT repo: https://github.com/AnswerDotAI/ModernBERT

### Direct Access

```python
from streaming import StreamingDataset

# Load the streaming dataset
dataset = StreamingDataset(
    remote='https://huggingface.co/datasets/jhu-clsp/mmbert-midtraining',
    local='/tmp/mmbert-midtraining-data',
    shuffle=True
)

# Access samples
for sample in dataset:
    text = sample['text']
    # Process your data...
```
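
For batched iteration, `StreamingDataset` works with a standard PyTorch `DataLoader`; a minimal sketch reusing `dataset` from the snippet above (batch size and worker count are arbitrary choices):

```python
from torch.utils.data import DataLoader

# Wrap the streaming dataset; default collation gathers the 'text' field into a list of strings.
loader = DataLoader(dataset, batch_size=8, num_workers=4)

for batch in loader:
    texts = batch['text']  # list of raw documents
    # Tokenize / process the batch...
    break
```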

## 🔗 Related Resources

- **Models**: [mmBERT Model Suite](https://huggingface.co/collections/jhu-clsp/mmbert-a-modern-multilingual-encoder-68b725831d7c6e3acc435ed4)
- **Phase 1**: [Pre-training Data](https://huggingface.co/datasets/jhu-clsp/mmbert-pretrain-p1-fineweb2-langs) (2.3T tokens)
- **Phase 3**: [Decay Phase Data](https://huggingface.co/datasets/jhu-clsp/mmbert-decay) (100B tokens)
- **Checkpoints**: [Training Checkpoints](https://huggingface.co/datasets/jhu-clsp/mmbert-checkpoints)
- **Paper**: [arXiv](https://arxiv.org/abs/2509.06888)
- **Code**: [GitHub Repository](https://github.com/jhu-clsp/mmBERT)

## Citation

```bibtex
@misc{marone2025mmbertmodernmultilingualencoder,
      title={mmBERT: A Modern Multilingual Encoder with Annealed Language Learning},
      author={Marc Marone and Orion Weller and William Fleshman and Eugene Yang and Dawn Lawrie and Benjamin Van Durme},
      year={2025},
      eprint={2509.06888},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2509.06888},
}
```