---
license: mit
language:
  - en
  - zh
base_model:
  - deepseek-ai/DeepSeek-V3-0324
pipeline_tag: text-generation
library_name: transformers
---

# DeepSeek V3 0324 AWQ

AWQ of DeepSeek V3 0324.

Quantized by Eric Hartford and v2ray.

This quant includes modified model code that fixes an overflow issue when running in float16; the general idea is sketched below.
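
The exact change lives in this repository's modeling code. The snippet below is only a minimal illustration of the usual approach (clamping activations into the float16 representable range); the function name and call sites are hypothetical, not the actual implementation.

```python
import torch

# Largest finite value representable in float16 (about 65504).
FP16_MAX = torch.finfo(torch.float16).max

def clamp_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    # Illustrative only: clip intermediate activations so values that would
    # overflow in fp16 stay finite instead of becoming inf/NaN.
    if hidden_states.dtype == torch.float16:
        hidden_states = hidden_states.clamp(min=-FP16_MAX, max=FP16_MAX)
    return hidden_states
```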

To serve using vLLM with 8x 80GB GPUs, use the following command:

```bash
VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 \
    --port 12345 \
    --max-model-len 65536 \
    --max-num-batched-tokens 65536 \
    --trust-remote-code \
    --tensor-parallel-size 8 \
    --gpu-memory-utilization 0.97 \
    --dtype float16 \
    --served-model-name deepseek-chat \
    --model cognitivecomputations/DeepSeek-V3-0324-AWQ
```
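
Once the server is up, it exposes an OpenAI-compatible API on the host and port set above. A minimal client sketch, assuming the port and served model name from the command (the `openai` Python package is just one way to call the endpoint, and the API key is a placeholder since vLLM does not check it by default):

```python
from openai import OpenAI

# Point at the vLLM server started above; adjust host/port if you changed the flags.
client = OpenAI(base_url="http://localhost:12345/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-chat",  # must match --served-model-name
    messages=[{"role": "user", "content": "Give me a one-sentence summary of AWQ."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```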

You can download the wheel I built for PyTorch 2.6 and Python 3.12 by clicking here.

Inference speed with batch size 1 and a short prompt:

- 8x H100: 48 TPS
- 8x A100: 38 TPS

Note:

- Inference speed will be better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with the full context length on just 8x 80GB GPUs.