cicdatopea committed · verified
Commit dda665f · 1 Parent(s): 130ee55

Update README.md
Files changed (1):
  1. README.md +13 -20
README.md CHANGED
@@ -1,10 +1,3 @@
- ---
- license: apache-2.0
- datasets:
- - NeelNanda/pile-10k
- base_model:
- - Qwen/QwQ-32B
- ---
  ## Model Details
  
  This model is an int4 model with group_size 128 and symmetric quantization of [Qwen/QwQ-32B](https://huggingface.co/Qwen/QwQ-32B) generated by [intel/auto-round](https://github.com/intel/auto-round) algorithm.
@@ -174,19 +167,19 @@ pip3 install lm-eval==0.4.7
  auto-round --model "OPEA/QwQ-32B-int4-AutoRound-awq-asym" --eval --eval_bs 16 --tasks lambada_openai,hellaswag,piqa,winogrande,truthfulqa_mc1,openbookqa,boolq,arc_easy,arc_challenge,mmlu
  ```
  
- | Metric         | BF16(lm-eval 0.4.5) | INT4 |
- | -------------- | ------------------- | ---- |
- | Avg            | 0.6600              |      |
- | arc_challenge  | 0.5392              |      |
- | arc_easy       | 0.8089              |      |
- | boolq          | 0.8645              |      |
- | hellaswag      | 0.6520              |      |
- | lambada_openai | 0.6697              |      |
- | mmlu           | 0.7982              |      |
- | openbookqa     | 0.3540              |      |
- | piqa           | 0.7947              |      |
- | truthfulqa_mc1 | 0.4211              |      |
- | winorgrande    | 0.6977              |      |
+ | Metric         | BF16 (lm-eval 0.4.5) | INT4   |
+ | -------------- | -------------------- | ------ |
+ | Avg            | 0.6600               | 0.6537 |
+ | arc_challenge  | 0.5392               | 0.5401 |
+ | arc_easy       | 0.8089               | 0.8085 |
+ | boolq          | 0.8645               | 0.8425 |
+ | hellaswag      | 0.6520               | 0.6461 |
+ | lambada_openai | 0.6697               | 0.6695 |
+ | mmlu           | 0.7982               | 0.7953 |
+ | openbookqa     | 0.3540               | 0.3140 |
+ | piqa           | 0.7947               | 0.8058 |
+ | truthfulqa_mc1 | 0.4211               | 0.4272 |
+ | winogrande     | 0.6977               | 0.6882 |
  
  ### Generate the model
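As a quick sanity check on the change, the `Avg` row in the table should be the plain (unweighted) mean of the ten per-task accuracies reported by lm-eval. A minimal sketch, using only the scores from the diff above (`winogrande` spelled as the task name, not the README's original typo):

```python
# Unweighted means over the ten tasks, as reported in the README table.
bf16 = {
    "arc_challenge": 0.5392, "arc_easy": 0.8089, "boolq": 0.8645,
    "hellaswag": 0.6520, "lambada_openai": 0.6697, "mmlu": 0.7982,
    "openbookqa": 0.3540, "piqa": 0.7947, "truthfulqa_mc1": 0.4211,
    "winogrande": 0.6977,
}
int4 = {
    "arc_challenge": 0.5401, "arc_easy": 0.8085, "boolq": 0.8425,
    "hellaswag": 0.6461, "lambada_openai": 0.6695, "mmlu": 0.7953,
    "openbookqa": 0.3140, "piqa": 0.8058, "truthfulqa_mc1": 0.4272,
    "winogrande": 0.6882,
}

def avg(scores):
    """Mean accuracy rounded to 4 decimals, matching the table's Avg row."""
    return round(sum(scores.values()) / len(scores), 4)

print(avg(bf16), avg(int4))  # → 0.66 0.6537
```

Both averages reproduce the table: 0.6600 for BF16 and 0.6537 for INT4, an overall drop of roughly 0.006, with `openbookqa` showing the largest single-task regression.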