Poor performance for DeepSeek-V3-AWQ

#9
by fridayl - opened

(screenshot: image.png)
The model generates unrelated content.
Command used:
python3 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 9000 --max-model-len 65536 --max-num-batched-tokens 65536 --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.97 --dtype float16 --served-model-name deepseek-chat --model /models/DeepSeek-V3-awq
Sampling parameters: temperature=0.6, top_p=0.67
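For reference, a minimal chat-completions payload matching these settings might look like the sketch below. Only the served model name and the sampling values come from the post; the prompt, `max_tokens`, and the endpoint path noted in the comment are assumptions about a standard OpenAI-compatible setup.

```python
import json

# Hypothetical request body for the OpenAI-compatible server started above.
payload = {
    "model": "deepseek-chat",  # matches --served-model-name
    "messages": [{"role": "user", "content": "Hello"}],  # assumed test prompt
    "temperature": 0.6,  # sampling settings from the post
    "top_p": 0.67,
    "max_tokens": 256,   # assumed limit, not from the post
}
body = json.dumps(payload)
print(body)
# POST this to http://<host>:9000/v1/chat/completions, or point the `openai`
# Python client at base_url="http://<host>:9000/v1".
```

Reproducing the issue with a fixed payload like this makes it easier to compare output across quantizations or sampling settings.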
