Thank you

#1
by davidsyoung - opened

I wanted to say thank you, as the group size of 128 on this quant allows me to run it on 16x3090 with 16k ctx, which is awesome.
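
For anyone trying a similar multi-GPU setup, a minimal vLLM sketch of loading this AWQ checkpoint with a 16k context window is below; the local path, tensor-parallel layout, and other settings are assumptions and will need adjusting to your own hardware:

```python
from vllm import LLM, SamplingParams

# Minimal sketch (not an official recipe): load the AWQ checkpoint across
# 16 GPUs with a 16k context. vLLM normally detects AWQ quantization from
# the checkpoint's quantization config, so no explicit flag is set here.
llm = LLM(
    model="/models/DeepSeek-R1-AWQ",  # placeholder local path to this quant
    tensor_parallel_size=16,          # e.g. 16x RTX 3090; adjust to your rig
    max_model_len=16384,              # 16k context as reported above
    trust_remote_code=True,
)

out = llm.generate(
    ["Explain AWQ group size in one sentence."],
    SamplingParams(max_tokens=128),
)
print(out[0].outputs[0].text)
```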

Did you ever test 256?

Thanks for your feedback. I have not converted or tested a version with group size 256, but you can produce one step by step: DeepSeek-R1 (FP8) -> BF16 -> AWQ (AutoAWQ, group_size=256).
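
For reference, here is a minimal AutoAWQ sketch of the final step (BF16 -> 4-bit AWQ with group_size=256). The paths are placeholders, quantizing a model of this size needs very large GPU memory, and DeepSeek-R1 support may require a recent or patched AutoAWQ build, so treat this as a sketch rather than the exact recipe used for this repo:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "/models/DeepSeek-R1-BF16"      # output of the FP8 -> BF16 cast
quant_path = "/models/DeepSeek-R1-AWQ-g256"  # where the quantized weights go

quant_config = {
    "zero_point": True,
    "q_group_size": 256,  # the group size asked about above (this repo uses 128)
    "w_bit": 4,
    "version": "GEMM",
}

# Load the BF16 weights and tokenizer, run AWQ calibration, then save.
model = AutoAWQForCausalLM.from_pretrained(
    model_path, low_cpu_mem_usage=True, trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```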

Let me know if you have any questions about it.

Can you tell me which GPUs and how many GPUs you used when performing AWQ quantization on DeepSeek-R1?

8x B200 (180 GB each); you might want to try other types of GPUs as well.

wanzhenchn changed discussion status to closed
