hzhwcmhf committed · Commit 8022daf · verified · 1 Parent(s): 15d0983

Update README.md

Files changed (1): README.md (+3 -7)
README.md CHANGED

@@ -17,7 +17,7 @@ tags:
 
 ## Introduction
 
-QwQ is the reasoning model of Qwen series. Compared with conventional instruction-tuned models QwQ which is capable of thinking and reasoning can achieve significantly enhanced performance in downstream tasks espeically hard problems. QwQ-32B is the medium-size reasoning model, which is capable of achieving competitive performance against state-of-the art reasoning models, e.g., DeepSeek-R1, o1-mini.
+QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.
 
 **This repo contains the AWQ-quantized 4-bit QwQ 32B model**, which has the following features:
 - Type: Causal Language Models
@@ -71,10 +71,6 @@ text = tokenizer.apply_chat_template(
     add_generation_prompt=True
 )
 
-# avoid empty thought content by forcing the model to start with "<think>\n"
-response_prefix = "<think>\n"
-text += response_prefix
-
 model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
 
 generated_ids = model.generate(
@@ -86,14 +82,14 @@ generated_ids = [
 ]
 
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
-print(response_prefix + response)
+print(response)
 ```
 
 ### Usage Guidelines
 
 To achieve optimal performance, we recommend the following settings:
 
-1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality.
+1. **Enforce Thoughtful Output**: Ensure the model starts with "\<think\>\n" to prevent generating empty thinking content, which can degrade output quality. If you use `apply_chat_template` and set `add_generation_prompt=True`, this is already automatically implemented, but it may cause the response to lack the \<think\> tag at the beginning. This is normal behavior.
 
 2. **Sampling Parameters**:
    - Use Temperature=0.6 and TopP=0.95 instead of Greedy decoding to avoid endless repetitions and enhance diversity.
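Taken together, these hunks simplify the README's quickstart: the manual `response_prefix` lines are dropped because `apply_chat_template(..., add_generation_prompt=True)` already appends the `<think>\n` prefix itself. Below is a minimal end-to-end sketch of the post-commit flow, assembled from the diff fragments above; the repo id `Qwen/QwQ-32B-AWQ`, the example prompt, and the `max_new_tokens` value are illustrative assumptions not shown in the diff.

```python
# Minimal sketch of the post-commit usage flow, assembled from the diff
# fragments above. Repo id, prompt, and max_new_tokens are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B-AWQ"  # assumed repo id for this AWQ checkpoint

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]

# add_generation_prompt=True already appends the "<think>\n" prefix,
# which is why this commit removes the manual response_prefix handling.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,  # assumed; long enough for extended reasoning
    do_sample=True,
    temperature=0.6,  # sampling settings recommended in the guidelines
    top_p=0.95,
)
# Keep only the newly generated tokens, dropping the echoed prompt.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```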
 
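The updated guideline notes that the response may lack the opening \<think\> tag, since that tag is already part of the prompt; typically only the closing `</think>` marker appears in the decoded text. Continuing from the `response` produced above, here is a small sketch of one way to separate the reasoning from the final answer; the `split_thinking` helper is hypothetical and not part of the README.

```python
# Hypothetical helper: split a QwQ response into reasoning and final answer,
# assuming the opening <think> tag was part of the prompt and only the
# closing </think> marker appears in the decoded text.
def split_thinking(response: str) -> tuple[str, str]:
    marker = "</think>"
    if marker in response:
        thought, _, answer = response.partition(marker)
        return thought.strip(), answer.strip()
    # No marker found: treat the whole response as the answer.
    return "", response.strip()

thought, answer = split_thinking(response)
print("reasoning:", thought)
print("answer:", answer)
```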