Files changed (1)
  1. README.md +175 -56
README.md CHANGED
@@ -1,57 +1,176 @@
- ---
- tags:
- - unsloth
- base_model:
- - Qwen/Qwen3-8B-Base
- ---
- # Qwen3-8B-Base
-
- ## Qwen3 Highlights
-
- Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
- Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5:
-
- - **Expanded Higher-Quality Pre-training Corpus:** Qwen3 is pre-trained on 36 trillion tokens across 119 languages, tripling the language coverage of Qwen2.5, with a much richer mix of high-quality data, including coding, STEM, reasoning, book, multilingual, and synthetic data.
- - **Training Techniques and Model Architecture:** Qwen3 incorporates a series of training techiques and architectural refinements, including global-batch load balancing loss for MoE models and qk layernorm for all models, leading to improved stability and overall performance.
- - **Three-stage Pre-training:** Stage 1 focuses on broad language modeling and general knowledge acquisition, Stage 2 improves reasoning skills like STEM, coding, and logical reasoning, and Stage 3 enhances long-context comprehension by extending training sequence lengths up to 32k tokens.
- - **Scaling Law Guided Hyperparameter Tuning:** Through comprehensive scaling law studies across the three-stage pre-training pipeline, Qwen3 systematically tunes critical hyperparameters, such as learning rate scheduler and batch size, separately for dense and MoE models, resulting in better training dynamics and final performance across different model scales.
-
- ## Model Overview
-
- **Qwen3-8B-Base** has the following features:
- - Type: Causal Language Models
- - Training Stage: Pretraining
- - Number of Parameters: 8.2B
- - Number of Paramaters (Non-Embedding): 6.95B
- - Number of Layers: 36
- - Number of Attention Heads (GQA): 32 for Q and 8 for KV
- - Context Length: 32,768
-
- For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
-
- ## Requirements
-
- The code of Qwen3 has been in the latest Hugging Face `transformers` and we advise you to use the latest version of `transformers`.
-
- With `transformers<4.51.0`, you will encounter the following error:
- ```
- KeyError: 'qwen3'
- ```
-
- ## Evaluation & Performance
-
- Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen3/).
-
- ### Citation
-
- If you find our work helpful, feel free to give us a cite.
-
- ```
- @misc{qwen3,
-     title = {Qwen3},
-     url = {https://qwenlm.github.io/blog/qwen3/},
-     author = {Qwen Team},
-     month = {April},
-     year = {2025}
- }
+ ---
+ tags:
+ - unsloth
+ base_model:
+ - Qwen/Qwen3-8B-Base
+ language:
+ - eng
+ - fra
+ - por
+ - deu
+ - ron
+ - swe
+ - dan
+ - bul
+ - rus
+ - ces
+ - ell
+ - ukr
+ - spa
+ - nld
+ - slk
+ - hrv
+ - pol
+ - lit
+ - nob
+ - nno
+ - fas
+ - slv
+ - guj
+ - lav
+ - ita
+ - oci
+ - nep
+ - mar
+ - bel
+ - srp
+ - ltz
+ - vec
+ - asm
+ - cym
+ - szl
+ - ast
+ - hne
+ - awa
+ - mai
+ - bho
+ - snd
+ - gle
+ - fao
+ - hin
+ - pan
+ - ben
+ - ori
+ - tgk
+ - ydd
+ - lmo
+ - lij
+ - scn
+ - fur
+ - srd
+ - glg
+ - cat
+ - isl
+ - als
+ - lim
+ - prs
+ - afr
+ - mkd
+ - sin
+ - urd
+ - mag
+ - bos
+ - hye
+ - zho
+ - yue
+ - mya
+ - ara
+ - ars
+ - apc
+ - arz
+ - ary
+ - acm
+ - acq
+ - aeb
+ - heb
+ - mlt
+ - ind
+ - zsm
+ - tgl
+ - ceb
+ - jav
+ - sun
+ - min
+ - ban
+ - bjn
+ - pag
+ - ilo
+ - war
+ - tam
+ - tel
+ - kan
+ - mal
+ - tur
+ - azj
+ - uzn
+ - kaz
+ - bak
+ - tat
+ - tha
+ - lao
+ - fin
+ - est
+ - hun
+ - vie
+ - khm
+ - jpn
+ - kor
+ - kat
+ - eus
+ - hat
+ - pap
+ - kea
+ - tpi
+ - swa
+ ---
+ # Qwen3-8B-Base
+
+ ## Qwen3 Highlights
+
+ Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.
+ Building upon extensive advancements in training data, model architecture, and optimization techniques, Qwen3 delivers the following key improvements over the previously released Qwen2.5:
+
+ - **Expanded Higher-Quality Pre-training Corpus:** Qwen3 is pre-trained on 36 trillion tokens across 119 languages, tripling the language coverage of Qwen2.5, with a much richer mix of high-quality data, including coding, STEM, reasoning, book, multilingual, and synthetic data.
+ - **Training Techniques and Model Architecture:** Qwen3 incorporates a series of training techniques and architectural refinements, including global-batch load balancing loss for MoE models and QK layernorm for all models, leading to improved stability and overall performance.
+ - **Three-stage Pre-training:** Stage 1 focuses on broad language modeling and general knowledge acquisition, Stage 2 improves reasoning skills such as STEM, coding, and logical reasoning, and Stage 3 enhances long-context comprehension by extending training sequence lengths up to 32k tokens.
+ - **Scaling Law Guided Hyperparameter Tuning:** Through comprehensive scaling law studies across the three-stage pre-training pipeline, Qwen3 systematically tunes critical hyperparameters, such as the learning rate scheduler and batch size, separately for dense and MoE models, resulting in better training dynamics and final performance across different model scales.
+
+ ## Model Overview
+
+ **Qwen3-8B-Base** has the following features:
+ - Type: Causal Language Models
+ - Training Stage: Pretraining
+ - Number of Parameters: 8.2B
+ - Number of Parameters (Non-Embedding): 6.95B
+ - Number of Layers: 36
+ - Number of Attention Heads (GQA): 32 for Q and 8 for KV
+ - Context Length: 32,768
+
+ For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
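+
+ The snippet below is a minimal usage sketch added for illustration (it is not part of the upstream model card): it loads the checkpoint with Hugging Face `transformers` and runs plain text completion. Since this is a pretrained base model rather than an instruct model, it is prompted with raw text instead of a chat template.
+
+ ```python
+ # Minimal sketch: plain-text completion with the base model.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Qwen/Qwen3-8B-Base"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",   # use the checkpoint's native precision where available
+     device_map="auto",    # place the 8.2B parameters on available devices (needs accelerate)
+ )
+
+ prompt = "The capital of France is"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```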
+
+ ## Requirements
+
+ Support for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
+
+ With `transformers<4.51.0`, you will encounter the following error:
+ ```
+ KeyError: 'qwen3'
+ ```
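+
+ If you hit that error, upgrading `transformers` (`pip install -U transformers`) resolves it. The check below is an illustrative sketch, not part of the upstream card, for verifying the installed version before loading the model; it assumes the `packaging` helper available alongside `transformers`.
+
+ ```python
+ # Sketch: fail fast if the installed transformers predates Qwen3 support (4.51.0).
+ import transformers
+ from packaging import version
+
+ if version.parse(transformers.__version__) < version.parse("4.51.0"):
+     raise RuntimeError(
+         f"transformers {transformers.__version__} is too old for Qwen3; "
+         "run: pip install -U transformers"
+     )
+ ```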
+
+ ## Evaluation & Performance
+
+ Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen3/).
+
+ ### Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @misc{qwen3,
+     title = {Qwen3},
+     url = {https://qwenlm.github.io/blog/qwen3/},
+     author = {Qwen Team},
+     month = {April},
+     year = {2025}
+ }
  ```