New 8B model much slower than old 7B model when running on vLLM.
Did anybody else notice that this model runs much slower on vLLM than the old 7B-1M model?
I'm using vLLM on 4 GPUs (Ampere), with weight-only FP8 quantization served through the Marlin kernel, 'enable_thinking=False', and I tried batch sizes 1 and 4 with similar results.
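For reference, this is roughly how I set things up (a sketch using vLLM's offline Python API; the exact model name, context length, prompt, and sampling settings here are placeholders):

```python
from vllm import LLM, SamplingParams

# Rough reproduction of the setup above: 4-way tensor parallelism and
# weight-only FP8 quantization (on Ampere, vLLM serves FP8 weights through
# the Marlin kernel). Model name, max_model_len, and sampling settings are
# illustrative.
llm = LLM(
    model="Qwen/Qwen3-8B",
    tensor_parallel_size=4,
    quantization="fp8",
    max_model_len=32768,  # head-room for the 28k-token document
)
sampling = SamplingParams(temperature=0.7, max_tokens=1024)

# enable_thinking=False is passed through the chat template
messages = [{"role": "user", "content": "Summarize the following document: ..."}]
prompt = llm.get_tokenizer().apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)

out = llm.generate([prompt], sampling)
print(out[0].outputs[0].text)
```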
Shown below is the performance reported by vLLM when summarizing a 28k-token document. I ran each task twice; I presume the second run is always much faster because of prefix caching. As you can see, the new model takes about 1.5x as long on the first run (20.8s vs 13.8s), and even on the second run, where prefix caching likely cuts the time dramatically, it is still roughly 30% slower (2.94s vs 2.23s). (A rough sketch of the timing loop is included after the logs.)
Old model: Qwen2.5-7B-1M ----------------------------------------------------------------
[00:13<00:00, 13.75s/it, est. speed input: 2063.93 toks/s, output: 14.69 toks/s] (first run)
[00:02<00:00, 2.23s/it, est. speed input: 12732.36 toks/s, output: 87.04 toks/s] (second run)
New model: Qwen3-8B ----------------------------------------------------------------
[00:20<00:00, 20.77s/it, est. speed input: 1366.44 toks/s, output: 8.62 toks/s] (first run)
[00:02<00:00, 2.94s/it, est. speed input: 9656.35 toks/s, output: 60.90 toks/s] (second run)
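For anyone who wants to reproduce the comparison, here is a rough sketch of the timing loop (the document is a placeholder, and depending on the vLLM version prefix caching may need to be enabled explicitly):

```python
import time
from vllm import LLM, SamplingParams

# Hypothetical timing harness for the numbers above: the same ~28k-token
# summarization prompt is submitted twice so the second run can reuse the
# cached prefix. Depending on the vLLM version, prefix caching may need
# to be turned on explicitly with enable_prefix_caching=True.
llm = LLM(
    model="Qwen/Qwen3-8B",
    tensor_parallel_size=4,
    quantization="fp8",
    enable_prefix_caching=True,
)
sampling = SamplingParams(temperature=0.7, max_tokens=512)

document = "..."  # placeholder for the 28k-token document
prompt = f"Summarize the following document:\n\n{document}"

for run in ("first run", "second run"):
    start = time.perf_counter()
    out = llm.generate([prompt], sampling)
    elapsed = time.perf_counter() - start
    n_out = len(out[0].outputs[0].token_ids)
    print(f"{run}: {elapsed:.2f}s total, {n_out / elapsed:.2f} output toks/s")
```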
As I reported elsewhere, even the 4B model is slower than the old 7B model.
How can that be?
Hi! I'm working with Hugging Face Transformers, and I noticed that Qwen2.5 used flash attention, while I'm currently testing Qwen3 without flash attention. This could be related to your question.
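If you want to rule that out on your side, this is roughly how I pin the attention backend in Transformers (assuming the flash-attn package is installed; the model name is just an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: request FlashAttention-2 explicitly so both models are compared
# with the same attention backend (requires the flash-attn package and an
# Ampere-or-newer GPU).
model_name = "Qwen/Qwen3-8B"  # or "Qwen/Qwen2.5-7B-Instruct-1M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
print(model.config._attn_implementation)  # should print "flash_attention_2"
```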