Updated vLLM to 0.8.x and hit some trouble
#34 opened by HuggingLianWang
I'm using the recommended script to run this model, and it works fine on vLLM 0.7.x.
Recently I updated vLLM to 0.8.x, and this quantized model no longer runs with the same script.
The error log shows:
File "/root/miniconda3/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 620, in <lambda>
lambda prefix: DeepseekV2DecoderLayer(
^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 508, in __init__
self.self_attn = attn_cls(
^^^^^^^^^
File "/root/miniconda3/lib/python3.11/site-packages/vllm/model_executor/models/deepseek_v2.py", line 440, in __init__
self.mla_attn = Attention(
^^^^^^^^^^
File "/root/miniconda3/lib/python3.11/site-packages/vllm/attention/layer.py", line 134, in __init__
self.impl = impl_cls(num_heads, head_size, scale, num_kv_heads,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.11/site-packages/vllm/attention/backends/triton_mla.py", line 63, in __init__
raise NotImplementedError(
NotImplementedError: TritonMLA with FP8 KV cache not yet supported
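For context, the traceback shows this failing at engine construction: the TritonMLA attention backend that vLLM 0.8.x picked for this MLA model raises NotImplementedError as soon as it sees an FP8 KV cache, before any weights are loaded. To double-check which vLLM version an environment is actually running (assuming the python on your PATH is the one vLLM is installed into), one option:

python -c "import vllm; print(vllm.__version__)"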
And my script is:
python -u -m vllm.entrypoints.openai.api_server \
--host=0.0.0.0 \
--port=9024 \
--model=./open_source_models/DeepSeek-R1-awq/ \
--tokenizer=./open_source_models/DeepSeek-R1-awq/ \
--served-model-name=DeepSeek-R1-awq \
--gpu-memory-utilization=0.97 \
--quantization moe_wna16 \
--kv-cache-dtype fp8_e5m2 \
--calculate-kv-scales \
--enable-reasoning \
--reasoning-parser deepseek_r1 \
--tensor-parallel-size=8 \
--max-num-seqs=10 \
--max-model-len=16384 \
--max-seq-len-to-capture=16384 \
--trust-remote-code \
--max-log-len=200
MLA doesn't support an FP8 KV cache yet; don't use it.
I removed these two flags, and then it works fine. Thx~
--kv-cache-dtype fp8_e5m2 \
--calculate-kv-scales \
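For anyone else landing here, the working configuration is just the original command with those two flags dropped; everything else (paths, port, parallelism) is unchanged from the post above:

python -u -m vllm.entrypoints.openai.api_server \
--host=0.0.0.0 \
--port=9024 \
--model=./open_source_models/DeepSeek-R1-awq/ \
--tokenizer=./open_source_models/DeepSeek-R1-awq/ \
--served-model-name=DeepSeek-R1-awq \
--gpu-memory-utilization=0.97 \
--quantization moe_wna16 \
--enable-reasoning \
--reasoning-parser deepseek_r1 \
--tensor-parallel-size=8 \
--max-num-seqs=10 \
--max-model-len=16384 \
--max-seq-len-to-capture=16384 \
--trust-remote-code \
--max-log-len=200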
But I'm concerned: will removing them affect performance?
No, because TritonMLA never supported an FP8 KV cache in the first place. I don't know what you were doing to get it working on 0.7.x.