---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B-Thinking-2507
tags:
- quantized
- w4a16
- llm-compressor
---
```
██╗    ██╗██╗  ██╗ █████╗  ██╗ ██████╗
██║    ██║██║  ██║██╔══██╗███║██╔════╝
██║ █╗ ██║███████║███████║╚██║███████╗
██║███╗██║╚════██║██╔══██║ ██║██╔═══██╗
╚███╔███╔╝     ██║██║  ██║ ██║╚██████╔╝
 ╚══╝╚══╝      ╚═╝╚═╝  ╚═╝ ╚═╝ ╚═════╝
      🗜️ COMPRESSED & OPTIMIZED 🚀
```

# Qwen3-30B-A3B-Thinking-2507 - W4A16 Quantized

W4A16 (4-bit weights, 16-bit activations) version of Qwen/Qwen3-30B-A3B-Thinking-2507, quantized with **LLM-Compressor**.

- 🗜️ **Memory**: ~75% reduction vs FP16
- 🚀 **Speed**: Faster inference on supported hardware
- 🔗 **Original model**: [Qwen/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507)

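Because the checkpoint is saved with `save_compressed=True` (compressed-tensors format, see the compression code below), recent vLLM builds can serve it directly. A minimal sketch; the repository id is left as a placeholder, and `max_model_len` should be adjusted to your hardware:

```python
from vllm import LLM, SamplingParams

# Placeholder: replace with this repository's id or a local download path.
model_id = "<this-repo-id>"

# vLLM picks up the compressed-tensors (W4A16) quantization config from the checkpoint.
llm = LLM(model=model_id, max_model_len=32768)

sampling = SamplingParams(temperature=0.6, top_p=0.95, top_k=20, max_tokens=4096)
outputs = llm.generate(["Give me a short introduction to large language models."], sampling)
print(outputs[0].outputs[0].text)
```
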
<details>
<summary>Click to view compression code</summary>

```python
from datasets import load_dataset
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model with memory management
model_stub = "Qwen/Qwen3-30B-A3B-Thinking-2507"
model_name = model_stub.split("/")[-1]

# Use conservative parameters
num_samples = 1024
max_seq_len = 8192

print(f"Loading model: {model_stub}")
model = AutoModelForCausalLM.from_pretrained(
    model_stub,
    torch_dtype="auto",
    device_map="auto",
    low_cpu_mem_usage=True,
    max_memory={0: "44GB", "cpu": "55GB"},
)

print("Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(model_stub)

print("Loading calibration dataset...")
def preprocess_fn(example):
    return {"text": tokenizer.apply_chat_template(
        example["messages"],
        add_generation_prompt=False,
        tokenize=False
    )}

# Load dataset and preprocess
ds = load_dataset("neuralmagic/LLM_compression_calibration", split=f"train[:{num_samples}]")
ds = ds.map(preprocess_fn)
ds = ds.shuffle(seed=42)

# Tokenize the dataset
def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=max_seq_len,
        truncation=True,
        add_special_tokens=False,
    )

print("Tokenizing dataset...")
ds = ds.map(tokenize, remove_columns=ds.column_names)

# Configure GPTQ with proper Qwen3 MoE ignore patterns
print("Configuring quantization recipe...")
recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    ignore=["lm_head", "re:.*mlp.gate$"],  # Qwen3 MoE pattern (no shared experts)
    dampening_frac=0.01,
    # sequential_targets omitted - let llmcompressor handle this automatically
)

# Apply quantization
print("Starting quantization process...")
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=max_seq_len,
    num_calibration_samples=num_samples,
)

# Save quantized model
save_path = model_name + "-gptq-w4a16"
print(f"Saving model to: {save_path}")
model.save_pretrained(save_path, save_compressed=True)
tokenizer.save_pretrained(save_path)

print("Quantization completed successfully!")
```

</details>

---

## 📄 Original Model README

# Qwen3-30B-A3B-Thinking-2507
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Highlights

Over the past three months, we have continued to scale the **thinking capability** of Qwen3-30B-A3B, improving both the **quality and depth** of reasoning. We are pleased to introduce **Qwen3-30B-A3B-Thinking-2507**, featuring the following key enhancements:

- **Significantly improved performance** on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise.
- **Markedly better general capabilities**, such as instruction following, tool usage, text generation, and alignment with human preferences.
- **Enhanced 256K long-context understanding** capabilities.

**NOTE**: This version has an increased thinking length. We strongly recommend its use in highly complex reasoning tasks.

![image/jpeg](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-2507/Qwen3-30B-A3B-Thinking-2507.jpeg)

## Model Overview

**Qwen3-30B-A3B-Thinking-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: **262,144 natively**.

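These architecture details can be cross-checked against the model configuration without downloading the weights. A quick sketch; the field names are assumed from the Hugging Face Qwen3-MoE config implementation:

```python
from transformers import AutoConfig

# Fetches only config.json, not the model weights.
cfg = AutoConfig.from_pretrained("Qwen/Qwen3-30B-A3B-Thinking-2507")

print(cfg.num_hidden_layers)        # expected: 48
print(cfg.num_attention_heads)      # expected: 32 (query heads)
print(cfg.num_key_value_heads)      # expected: 4 (KV heads, GQA)
print(cfg.num_experts)              # expected: 128
print(cfg.num_experts_per_tok)      # expected: 8
print(cfg.max_position_embeddings)  # expected: 262144
```
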
**NOTE: This model supports only thinking mode. Meanwhile, specifying `enable_thinking=True` is no longer required.**

Additionally, to enforce model thinking, the default chat template automatically includes `<think>`. Therefore, it is normal for the model's output to contain only `</think>` without an explicit opening `<think>` tag.

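A quick way to see this behavior is to render the chat template and inspect the end of the prompt; a small sketch (the comment describes what the note above implies, not a verbatim dump):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B-Thinking-2507")

text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    tokenize=False,
    add_generation_prompt=True,
)

# The generation prompt already opens the thinking block, which is why model
# outputs close it with </think> but never emit an opening <think> themselves.
print(repr(text[-60:]))  # the rendered prompt ends with an opening <think> tag
```
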
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Performance

| | Gemini2.5-Flash-Thinking | Qwen3-235B-A22B Thinking | Qwen3-30B-A3B Thinking | Qwen3-30B-A3B-Thinking-2507 |
| --- | --- | --- | --- | --- |
| **Knowledge** | | | | |
| MMLU-Pro | 81.9 | **82.8** | 78.5 | 80.9 |
| MMLU-Redux | 92.1 | **92.7** | 89.5 | 91.4 |
| GPQA | **82.8** | 71.1 | 65.8 | 73.4 |
| SuperGPQA | 57.8 | **60.7** | 51.8 | 56.8 |
| **Reasoning** | | | | |
| AIME25 | 72.0 | 81.5 | 70.9 | **85.0** |
| HMMT25 | 64.2 | 62.5 | 49.8 | **71.4** |
| LiveBench 20241125 | 74.3 | **77.1** | 74.3 | 76.8 |
| **Coding** | | | | |
| LiveCodeBench v6 (25.02-25.05) | 61.2 | 55.7 | 57.4 | **66.0** |
| CFEval | 1995 | **2056** | 1940 | 2044 |
| OJBench | 23.5 | **25.6** | 20.7 | 25.1 |
| **Alignment** | | | | |
| IFEval | **89.8** | 83.4 | 86.5 | 88.9 |
| Arena-Hard v2$ | 56.7 | **61.5** | 36.3 | 56.0 |
| Creative Writing v3 | **85.0** | 84.6 | 79.1 | 84.4 |
| WritingBench | 83.9 | 80.3 | 77.0 | **85.0** |
| **Agent** | | | | |
| BFCL-v3 | 68.6 | 70.8 | 69.1 | **72.4** |
| TAU1-Retail | 65.2 | 54.8 | 61.7 | **67.8** |
| TAU1-Airline | **54.0** | 26.0 | 32.0 | 48.0 |
| TAU2-Retail | **66.7** | 40.4 | 34.2 | 58.8 |
| TAU2-Airline | 52.0 | 30.0 | 36.0 | **58.0** |
| TAU2-Telecom | **31.6** | 21.9 | 22.8 | 26.3 |
| **Multilingualism** | | | | |
| MultiIF | 74.4 | 71.9 | 72.2 | **76.4** |
| MMLU-ProX | **80.2** | 80.0 | 73.1 | 76.4 |
| INCLUDE | **83.9** | 78.7 | 71.9 | 74.4 |
| PolyMATH | 49.8 | **54.7** | 46.1 | 52.6 |

$ For reproducibility, we report the win rates evaluated by GPT-4.1.

\& For highly challenging tasks (including PolyMATH and all reasoning and coding tasks), we use an output length of 81,920 tokens. For all other tasks, we set the output length to 32,768.

## Quickstart

The code for Qwen3-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```

The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-30B-A3B-Thinking-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)  # no opening <think> tag
print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B-Thinking-2507 --context-length 262144 --reasoning-parser deepseek-r1
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-30B-A3B-Thinking-2507 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
```

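Either server exposes an OpenAI-compatible API, so any OpenAI client can query it. A minimal sketch, assuming vLLM's default address `http://localhost:8000/v1` (the API key is a dummy value):

```python
from openai import OpenAI

# Point the client at the locally served endpoint; no real key is required.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-30B-A3B-Thinking-2507",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=32768,
)
print(response.choices[0].message.content)
```
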
**Note: If you encounter out-of-memory (OOM) issues, you may consider reducing the context length to a smaller value. However, since the model may require longer token sequences for reasoning, we strongly recommend using a context length greater than 131,072 when possible.**

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
# Using Alibaba Cloud Model Studio
llm_cfg = {
    'model': 'qwen3-30b-a3b-thinking-2507',
    'model_type': 'qwen_dashscope',
}

# Using OpenAI-compatible API endpoint. It is recommended to disable the reasoning and the tool call parsing
# functionality of the deployment frameworks and let Qwen-Agent automate the related operations. For example,
# `VLLM_USE_MODELSCOPE=true vllm serve Qwen/Qwen3-30B-A3B-Thinking-2507 --served-model-name Qwen3-30B-A3B-Thinking-2507 --tensor-parallel-size 8 --max-model-len 262144`.
#
# llm_cfg = {
#     'model': 'Qwen3-30B-A3B-Thinking-2507',
#
#     # Use a custom endpoint compatible with OpenAI API:
#     'model_server': 'http://localhost:8000/v1',  # api_base without reasoning and tool call parsing
#     'api_key': 'EMPTY',
#     'generate_cfg': {
#         'thought_in_content': True,
#     },
# }

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
        'time': {
            'command': 'uvx',
            'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
        },
        "fetch": {
            "command": "uvx",
            "args": ["mcp-server-fetch"]
        }
    }},
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Best Practices

To achieve optimal performance, we recommend the following settings (a short code sketch applying them follows this list):

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.

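As a concrete illustration of points 1 and 4, here is a hedged sketch that continues the `transformers` Quickstart above: it applies the recommended sampling parameters (`presence_penalty` is not a `transformers.generate` argument, so it is omitted here) and keeps only the final answer, not the thinking content, in the conversation history.

```python
# Continues from the Quickstart snippet above (model and tokenizer already loaded).
messages = [{"role": "user", "content": "What is 17 * 23?"}]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # recommended output budget for most queries
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# Split off the thinking block (token 151668 is </think>, as in the Quickstart).
try:
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0
final_answer = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

# Best practice 4: append only the final answer to the history, never the thinking content.
messages.append({"role": "assistant", "content": final_answer})
```
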
### Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```