---
language:
- en
---

# SmolLM3 Training Configs

**[IMPORTANT NOTE]**: For the latest configs, go to this repo: https://github.com/huggingface/smollm/tree/main/text/pretraining/smollm3


Here you can find the training configs for [SmolLM3-3B-Base](https://huggingface.co/HuggingFaceTB/SmolLM3-3B-Base) using [nanotron](https://github.com/huggingface/nanotron/), with the exact training details and data mixtures.

The model was trained on 11.2T tokens in 3 stages with a 4k context length:

- stage 1 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/stage1_8T.yaml)
- stage 2 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/stage2_8T_9T.yaml)
- stage 3 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/stage3_9T_11T.yaml)
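
If you want to inspect the exact data mixture and hyperparameters for a given stage, the YAML files can be pulled straight from this dataset repo. Below is a minimal sketch (assuming `huggingface_hub` and `pyyaml` are installed; the `data_stages` and `start_training_step` keys are assumptions about the nanotron config layout) that downloads the stage 1 config and prints its top-level sections:

```python
# Minimal sketch: download and inspect one of the stage configs from this repo.
# Assumes `huggingface_hub` and `pyyaml` are installed; the `data_stages` /
# `start_training_step` keys are assumptions about the nanotron config layout,
# so .get() is used throughout to avoid KeyErrors.
import yaml
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="HuggingFaceTB/smollm3-configs",
    filename="stage1_8T.yaml",
    repo_type="dataset",
)

with open(config_path) as f:
    cfg = yaml.safe_load(f)

print("Top-level sections:", sorted(cfg.keys()))

# If the config defines data stages, list them (assumed key names).
for stage in cfg.get("data_stages", []) or []:
    print(stage.get("name"), "starts at step", stage.get("start_training_step"))
```

The downloaded YAML can then be passed to nanotron's training entry point (e.g. `run_train.py --config-file ...` in the nanotron repo) to reproduce a stage.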

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/944zWNgcI1I06RZuoP11B.png)


We then trained for 2 additional stages to extend the context length to 64k:

- stage 4 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/long_context_4k_to_32k.yaml)
- stage 5 [config](https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs/blob/main/long_context_32k_to_64.yaml)
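
To see what actually changes between the long-context stages, you can load both configs and compare the sequence length. The sketch below assumes the standard nanotron `tokens.sequence_length` field name, which is not confirmed by this card:

```python
# Sketch: compare the sequence length across the two long-context configs.
# The tokens.sequence_length key is assumed from nanotron's usual config layout.
import yaml
from huggingface_hub import hf_hub_download

for filename in ("long_context_4k_to_32k.yaml", "long_context_32k_to_64.yaml"):
    path = hf_hub_download(
        repo_id="HuggingFaceTB/smollm3-configs",
        filename=filename,
        repo_type="dataset",
    )
    with open(path) as f:
        cfg = yaml.safe_load(f)
    seq_len = cfg.get("tokens", {}).get("sequence_length")
    print(f"{filename}: sequence_length = {seq_len}")
```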

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/jBOiemVtbfi9YD7Pki6sY.png)