danielhanchen committed
Commit d3221d0 · verified · 1 Parent(s): b70fc0b

Add files using upload-large-folder tool

Files changed (1): README.md (+31, −69)
README.md CHANGED
@@ -1,58 +1,16 @@
  ---
- base_model: Qwen/QwQ-32B
  license: apache-2.0
  license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
  language:
  - en
  pipeline_tag: text-generation
  tags:
  - chat
- - qwen
  ---
- <div>
- <p style="margin-bottom: 0; margin-top: 0;">
- <strong>This is Qwen-QwQ-32B with our bug fixes. <br> See <a href="https://huggingface.co/collections/unsloth/qwen-qwq-32b-collection-676b3b29c20c09a8c71a6235">our collection</a> for versions of QwQ-32B with our bug fixes including GGUF & 4-bit formats.</strong>
- </p>
- <p style="margin-bottom: 0;">
- <em>Unsloth's QwQ-32B <a href="https://unsloth.ai/blog/dynamic-4bit">Dynamic Quants</a> is selectively quantized, greatly improving accuracy over standard 4-bit.</em>
- </p>
- <div style="display: flex; gap: 5px; align-items: center; ">
- <a href="https://github.com/unslothai/unsloth/">
- <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
- </a>
- <a href="https://discord.gg/unsloth">
- <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
- </a>
- <a href="https://docs.unsloth.ai/">
- <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
- </a>
- </div>
- <h1 style="margin-top: 0rem;">Finetune your own Reasoning model like R1 with Unsloth!</h2>
- </div>
-
- We have a free Google Colab notebook for turning Qwen2.5 (3B) into a reasoning model: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(3B)-GRPO.ipynb
-
- ## ✨ Finetune for Free
-
- All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
-
- | Unsloth supports | Free Notebooks | Performance | Memory use |
- |-----------------|----------------|-------------|------------|
- | **GRPO with Phi-4** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) | 2x faster | 80% less |
- | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
- | **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
- | **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
- | **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
- | **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
- | **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
- | **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
- | **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
-
- - This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- - This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
-
 
  # QwQ-32B
 
@@ -78,6 +36,7 @@ QwQ is the reasoning model of the Qwen series. Compared with conventional instru
  - Number of Layers: 64
  - Number of Attention Heads (GQA): 40 for Q and 8 for KV
  - Context Length: Full 131,072 tokens
 
  **Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
 
@@ -141,30 +100,33 @@ To achieve optimal performance, we recommend the following settings:
  1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
 
  2. **Sampling Parameters**:
- - Use Temperature=0.6 and TopP=0.95 instead of Greedy decoding to avoid endless repetitions.
  - Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
 
- 3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
  - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
  - **Multiple-Choice Questions**: Add the following instruction to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."
 
- 4. **Handle Long Inputs**: For inputs exceeding 32,768 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
-
- For supported frameworks, you could add the following to `config.json` to enable YaRN:
- ```json
- {
-   ...,
-   "rope_scaling": {
-     "factor": 4.0,
-     "original_max_position_embeddings": 32768,
-     "type": "yarn"
-   }
- }
- ```
-
- For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familar with vLLM.
- Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
- We advise adding the `rope_scaling` configuration only when processing long contexts is required.
 
  ## Evaluation & Performance
 
@@ -178,17 +140,17 @@ If you find our work helpful, feel free to give us a cite.
 
  ```
  @misc{qwq32b,
- title = {QwQ-32B: The Power of Scaling RL},
  url = {https://qwenlm.github.io/blog/qwq-32b/},
  author = {Qwen Team},
  month = {March},
  year = {2025}
  }
 
- @article{qwen2,
- title={Qwen2 Technical Report},
- author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
- journal={arXiv preprint arXiv:2407.10671},
  year={2024}
  }
  ```
 
  ---
  license: apache-2.0
  license_link: https://huggingface.co/Qwen/QWQ-32B/blob/main/LICENSE
  language:
  - en
  pipeline_tag: text-generation
+ base_model:
+ - Qwen/QwQ-32B
  tags:
  - chat
+ - unsloth
+ library_name: transformers
  ---
 
  # QwQ-32B
 
  - Number of Layers: 64
  - Number of Attention Heads (GQA): 40 for Q and 8 for KV
  - Context Length: Full 131,072 tokens
+ - For prompts exceeding 8,192 tokens in length, you must enable YaRN as outlined in [this section](#usage-guidelines).
 
  **Note:** For the best experience, please review the [usage guidelines](#usage-guidelines) before deploying QwQ models.
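As a quick sanity check, the architecture numbers above can be read straight off the published config. A minimal sketch, assuming the `Qwen/QwQ-32B` repo id and the standard `transformers` attribute names for Qwen2-style configs:

```python
from transformers import AutoConfig

# Fetch the model's config.json from the Hub and compare it with the card.
# (Attribute names assume a Qwen2-style config class in `transformers`.)
config = AutoConfig.from_pretrained("Qwen/QwQ-32B")

print(config.num_hidden_layers)        # card: 64 layers
print(config.num_attention_heads)      # card: 40 query heads
print(config.num_key_value_heads)      # card: 8 KV heads (GQA)
print(config.max_position_embeddings)  # compare with the 131,072-token full context
```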
 
  1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
 
  2. **Sampling Parameters**:
+ - Use Temperature=0.6, TopP=0.95, MinP=0 instead of greedy decoding to avoid endless repetitions.
  - Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
+ - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, a higher value may cause occasional language mixing and a slight drop in performance.
+
+ 3. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final answer, not the thinking content. This is already implemented in `apply_chat_template` (guidelines 1-3 are illustrated in the sketch after this list).
 
+ 4. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
  - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
  - **Multiple-Choice Questions**: Add the following instruction to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `\"answer\": \"C\"`."
 
+ 5. **Handle Long Inputs**: For inputs exceeding 8,192 tokens, enable [YaRN](https://arxiv.org/abs/2309.00071) to improve the model's ability to capture long-sequence information effectively.
+
+ For supported frameworks, you can add the following to `config.json` to enable YaRN:
+ ```json
+ {
+   ...,
+   "rope_scaling": {
+     "factor": 4.0,
+     "original_max_position_embeddings": 32768,
+     "type": "yarn"
+   }
+ }
+ ```
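If you prefer not to edit `config.json` on disk, the same override can be applied at load time. A minimal sketch for `transformers`, assuming the `Qwen/QwQ-32B` repo id; the values mirror the JSON above:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "Qwen/QwQ-32B"

# Apply the YaRN rope scaling from the snippet above without touching config.json.
config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```

As with the `config.json` route, this enables static YaRN, so only apply it when you actually need long inputs.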
 
+ For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
+ Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
+ We advise adding the `rope_scaling` configuration only when processing long contexts is required.
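Guidelines 1-3 above map onto a short generation loop. A minimal sketch using `transformers`, assuming the `Qwen/QwQ-32B` repo id; the sampling values are the ones recommended above (`min_p` requires a reasonably recent `transformers` release):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many r's are in the word \"strawberry\"?"}]

# Guideline 1: add_generation_prompt=True makes the model start inside <think>,
# so the generated text may not repeat the opening <think> tag itself.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Guideline 2: sample instead of greedy decoding to avoid endless repetitions.
output_ids = model.generate(
    input_ids,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=30,  # anywhere in the recommended 20-40 range
    min_p=0.0,
)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Guideline 3: keep only the final answer, not the thinking content, in history.
final_answer = response.split("</think>")[-1].strip()
messages.append({"role": "assistant", "content": final_answer})
print(final_answer)
```

Note how only the text after `</think>` is appended to the history, per guideline 3; `apply_chat_template` applies the same convention when formatting past turns.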
 
  ## Evaluation & Performance
 
 
  ```
  @misc{qwq32b,
+ title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
  url = {https://qwenlm.github.io/blog/qwq-32b/},
  author = {Qwen Team},
  month = {March},
  year = {2025}
  }
 
+ @article{qwen2.5,
+ title={Qwen2.5 Technical Report},
+ author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
+ journal={arXiv preprint arXiv:2412.15115},
  year={2024}
  }
  ```