What is the difference between Qwen/Qwen3-32B-FP8 and this quantized model?

#1
by traphix - opened

Is there any difference from Qwen/Qwen3-32B-FP8?

Big thanks for this quantization. For whatever reason I was unable to run the FP8 version provided by Qwen (it was crashing with

ValueError("type fp8e4nv not supported in this architecture. The supported fp8 dtypes are ('fp8e4b15', 'fp8e5')")

However this one runs great in vLLM.


I have the same problem running on A800.

Both this one and Qwen/Qwen3-32B-FP8 work on a 4090, but this one is much faster.
This one can use some fast kernels in vLLM or SGLang, as sketched below.
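
A minimal sketch of loading the checkpoint with vLLM's offline API. The repo ID below is an assumption used for illustration; substitute the actual model ID of this quantized checkpoint.

```python
from vllm import LLM, SamplingParams

# Assumed repo ID; replace with the actual quantized checkpoint.
llm = LLM(model="RedHatAI/Qwen3-32B-FP8-dynamic", max_model_len=8192)

params = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Explain FP8 quantization in one paragraph."], params)
print(outputs[0].outputs[0].text)
```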

Red Hat AI org

This model was produced using llm-compressor and is compatible with fast vLLM kernels. It uses dynamic per-token quantization for activations and static per-channel quantization for weights. After inspecting Qwen/Qwen3-32B-FP8, it appears they use grouped quantization, which will lead to slightly different results. I am able to run it with vLLM on an H100, but I'm not 100% sure about support on other hardware. The overall accuracy is similar based on some benchmarks we ran. A sketch of this kind of recipe is below.
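
For reference, a minimal sketch of producing an FP8 checkpoint with llm-compressor using dynamic per-token activation quantization and static per-channel weights. The scheme name, arguments, and output paths here are assumptions for illustration, not the exact recipe used for this model, and the oneshot import path may vary by llm-compressor version.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor import oneshot  # import path may differ across llm-compressor versions

model_id = "Qwen/Qwen3-32B"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumed recipe: FP8_DYNAMIC quantizes Linear weights per-channel (static)
# and activations per-token at runtime, so no calibration data is needed.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head"],
)

oneshot(model=model, recipe=recipe)

# Hypothetical output directory for the compressed checkpoint.
model.save_pretrained("Qwen3-32B-FP8-dynamic", save_compressed=True)
tokenizer.save_pretrained("Qwen3-32B-FP8-dynamic")
```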

alexmarques changed discussion status to closed