lbourdois committed
Commit 47a349f · verified · 1 Parent(s): 95e917e

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
  1. README.md +165 -153
README.md CHANGED
@@ -1,154 +1,166 @@
- ---
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
- language:
- - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-32B
- tags:
- - chat
- library_name: transformers
- ---
-
- Weirdly uncensored. There are some quirks, though: the model tends to emit something like /think and then redo the post entirely. Fixable with some tricks, but worth noting.
-
- Only planning on doing the 4bpw quant for now. The measurement file is included so you can make your own quants.
-
- # QwQ-32B
-
- <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
-     <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
- </a>
-
- ## Introduction
-
- QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1 and o1-mini.
-
- <p align="center">
-     <img width="100%" src="figures/benchmark.jpg">
- </p>
-
- **This repo contains the QwQ 32B model**, which has the following features:
- - Type: Causal Language Models
- - Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
- - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- - Number of Parameters: 32.5B
- - Number of Parameters (Non-Embedding): 31.0B
- - Number of Layers: 64
- - Number of Attention Heads (GQA): 40 for Q and 8 for KV
- - Context Length: Full 131,072 tokens
-
- **Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
-
- You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai).
-
- For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
-
- ## Requirements
-
- QwQ is based on Qwen2.5, whose code has been integrated into the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`.
-
- With `transformers<4.37.0`, you will encounter the following error:
- ```
- KeyError: 'qwen2'
- ```
-
- ## Quickstart
-
- The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model, and how to generate content:
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_name = "Qwen/QwQ-32B"
-
- model = AutoModelForCausalLM.from_pretrained(
-     model_name,
-     torch_dtype="auto",
-     device_map="auto"
- )
- tokenizer = AutoTokenizer.from_pretrained(model_name)
-
- prompt = "How many r's are in the word \"strawberry\""
- messages = [
-     {"role": "user", "content": prompt}
- ]
- text = tokenizer.apply_chat_template(
-     messages,
-     tokenize=False,
-     add_generation_prompt=True
- )
-
- model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
-
- generated_ids = model.generate(
-     **model_inputs,
-     max_new_tokens=32768
- )
- generated_ids = [
-     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
- ]
-
- response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
- print(response)
- ```
-
- ### Usage Guidelines
-
- To achieve optimal performance, we recommend the following settings:
-
- 1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent it from generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already implemented automatically, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
-
- 2. **Sampling Parameters**:
-    - Use Temperature=0.6 and TopP=0.95 instead of greedy decoding to avoid endless repetitions.
-    - Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
-
- 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
-    - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
-    - **Multiple-Choice Questions**: Add the following instruction to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."
-
- 4. **Handle Long Inputs**: For inputs exceeding 32,768 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
-
- For supported frameworks, you can add the following to `config.json` to enable YaRN:
- ```json
- {
-     ...,
-     "rope_scaling": {
-         "factor": 4.0,
-         "original_max_position_embeddings": 32768,
-         "type": "yarn"
-     }
- }
- ```
-
- For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
- Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
- We advise adding the `rope_scaling` configuration only when processing long contexts is required.
-
- ## Evaluation & Performance
-
- Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwq-32b/).
-
- For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
-
- ## Citation
-
- If you find our work helpful, feel free to cite us.
-
- ```
- @misc{qwq32b,
-     title = {QwQ-32B: The Power of Scaling RL},
-     url = {https://qwenlm.github.io/blog/qwq-32b/},
-     author = {Qwen Team},
-     month = {March},
-     year = {2025}
- }
-
- @article{qwen2,
-     title={Qwen2 Technical Report},
-     author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
-     journal={arXiv preprint arXiv:2407.10671},
-     year={2024}
- }
- ```
 
+ ---
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-32B
+ tags:
+ - chat
+ library_name: transformers
+ ---
+
+ Weirdly uncensored. There are some quirks, though: the model tends to emit something like /think and then redo the post entirely. Fixable with some tricks, but worth noting.
+
+ Only planning on doing the 4bpw quant for now. The measurement file is included so you can make your own quants.
+
+ # QwQ-32B
+
+ <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
+     <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+
+ ## Introduction
+
+ QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1 and o1-mini.
+
+ <p align="center">
+     <img width="100%" src="figures/benchmark.jpg">
+ </p>
+
+ **This repo contains the QwQ 32B model**, which has the following features:
+ - Type: Causal Language Models
+ - Training Stage: Pretraining & Post-training (Supervised Finetuning and Reinforcement Learning)
+ - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
+ - Number of Parameters: 32.5B
+ - Number of Parameters (Non-Embedding): 31.0B
+ - Number of Layers: 64
+ - Number of Attention Heads (GQA): 40 for Q and 8 for KV
+ - Context Length: Full 131,072 tokens
+
+ **Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
+
+ You can try our [demo](https://huggingface.co/spaces/Qwen/QwQ-32B-Demo) or access QwQ models via [QwenChat](https://chat.qwen.ai).
+
+ For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwq-32b/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
+
+ ## Requirements
+
+ QwQ is based on Qwen2.5, whose code has been integrated into the latest Hugging Face `transformers`. We advise you to use the latest version of `transformers`.
+
+ With `transformers<4.37.0`, you will encounter the following error:
+ ```
+ KeyError: 'qwen2'
+ ```
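The version requirement above boils down to a simple check: the `qwen2` architecture was only registered in `transformers` 4.37.0. A minimal sketch of that gate (the helper name is ours, not part of the library):

```python
# Sketch of the version gate described above: loading QwQ/Qwen2.5 checkpoints
# raises KeyError: 'qwen2' on transformers releases older than 4.37.0.
def supports_qwen2(version: str) -> bool:
    # Compare only the (major, minor) components of the version string.
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) >= (4, 37)

print(supports_qwen2("4.36.2"))  # → False (would raise KeyError: 'qwen2')
print(supports_qwen2("4.49.0"))  # → True
```

In practice, simply upgrading with `pip install -U transformers` avoids the issue.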
+
+ ## Quickstart
+
+ The following code snippet shows how to use `apply_chat_template` to load the tokenizer and model, and how to generate content:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Qwen/QwQ-32B"
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ prompt = "How many r's are in the word \"strawberry\""
+ messages = [
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=32768
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
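The response produced above contains the model's reasoning followed by its final answer, separated by a closing `</think>` tag (the opening tag may be absent when the chat template inserts it for you; see the usage guidelines). A minimal sketch for splitting the two; the function name is ours:

```python
def split_think(response: str) -> tuple:
    """Split a QwQ response into (reasoning, answer).

    We key only on the closing tag, because the opening <think> tag
    may already have been consumed by the chat template.
    """
    tag = "</think>"
    if tag in response:
        reasoning, answer = response.split(tag, 1)
        return reasoning.replace("<think>", "").strip(), answer.strip()
    # No closing tag: treat the whole response as the answer.
    return "", response.strip()

reasoning, answer = split_think("Let me count the r's...</think>\nThere are 3 r's.")
print(answer)  # → There are 3 r's.
```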
+
+ ### Usage Guidelines
+
+ To achieve optimal performance, we recommend the following settings:
+
+ 1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent it from generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already implemented automatically, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
+
+ 2. **Sampling Parameters**:
+    - Use Temperature=0.6 and TopP=0.95 instead of greedy decoding to avoid endless repetitions.
+    - Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
+
+ 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
+    - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
+    - **Multiple-Choice Questions**: Add the following instruction to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."
+
+ 4. **Handle Long Inputs**: For inputs exceeding 32,768 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
+
+ For supported frameworks, you can add the following to `config.json` to enable YaRN:
+ ```json
+ {
+     ...,
+     "rope_scaling": {
+         "factor": 4.0,
+         "original_max_position_embeddings": 32768,
+         "type": "yarn"
+     }
+ }
+ ```
+
+ For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
+ Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
+ We advise adding the `rope_scaling` configuration only when processing long contexts is required.
+
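The sampling recommendations in guideline 2 can be collected into one set of keyword arguments for the Quickstart's `model.generate` call. A sketch; the dict name and the specific TopK value of 30 (within the recommended 20-40 range) are our choices:

```python
# Recommended decoding settings from the usage guidelines above.
QWQ_SAMPLING = dict(
    do_sample=True,       # sampled decoding instead of greedy, avoids endless repetition
    temperature=0.6,
    top_p=0.95,
    top_k=30,             # any value in the 20-40 range is suggested
    max_new_tokens=32768,
)

# Usage with the objects from the Quickstart snippet:
# generated_ids = model.generate(**model_inputs, **QWQ_SAMPLING)
```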
+ ## Evaluation & Performance
+
+ Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwq-32b/).
+
+ For requirements on GPU memory and the respective throughput, see the results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
+
+ ## Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @misc{qwq32b,
+     title = {QwQ-32B: The Power of Scaling RL},
+     url = {https://qwenlm.github.io/blog/qwq-32b/},
+     author = {Qwen Team},
+     month = {March},
+     year = {2025}
+ }
+
+ @article{qwen2,
+     title={Qwen2 Technical Report},
+     author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
+     journal={arXiv preprint arXiv:2407.10671},
+     year={2024}
+ }
+ ```