---
license: mit
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-V3-0324
pipeline_tag: text-generation
library_name: transformers
---
# DeepSeek V3 0324 AWQ
An AWQ quantization of DeepSeek V3 0324.

Quantized by [Eric Hartford](https://huggingface.co/ehartford) and [v2ray](https://huggingface.co/v2ray).

This quant includes modified model code that fixes an overflow issue when running in float16.

To serve the model with vLLM on 8x 80GB GPUs, use the following command:
```sh
VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 --port 12345 \
    --max-model-len 65536 --max-num-batched-tokens 65536 \
    --trust-remote-code --tensor-parallel-size 8 \
    --gpu-memory-utilization 0.97 --dtype float16 \
    --served-model-name deepseek-chat \
    --model cognitivecomputations/DeepSeek-V3-0324-AWQ
```
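The command above starts vLLM's OpenAI-compatible server, so any OpenAI-style client can talk to it. A minimal sketch of a request with `curl`, assuming the server is reachable on `localhost:12345` with the served model name configured above:
```sh
# Minimal chat completion request against the OpenAI-compatible endpoint
# started above (adjust host/port if you changed them).
curl http://localhost:12345/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 128
      }'
```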
You can download the vLLM wheel I built for PyTorch 2.6 and Python 3.12 [here](https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl).
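If you want to use that wheel, a minimal sketch of downloading and installing it into a Python 3.12 environment (the local file name is simply the decoded form of the link above):
```sh
# Download the prebuilt vLLM wheel (PyTorch 2.6, Python 3.12) and install it locally.
wget "https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl" \
  -O "vllm-0.7.3.dev187+g0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl"
pip install "./vllm-0.7.3.dev187+g0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl"
```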

Inference speed with batch size 1 and a short prompt:
- 8x H100: 48 TPS
- 8x A100: 38 TPS

Note:
- Inference speed is better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with the full context length on just 8x 80GB GPUs.