ehartford committed on
Commit 0b696dd · verified · 1 Parent(s): 3dd4fbf

Update README.md

Files changed (1)
  1. README.md +30 -1
README.md CHANGED
@@ -1 +1,30 @@
- asda
+ ---
+ license: mit
+ language:
+ - en
+ - zh
+ base_model:
+ - deepseek-ai/DeepSeek-V3-0324
+ pipeline_tag: text-generation
+ library_name: transformers
+ ---
+ # DeepSeek V3 0324 AWQ
+ AWQ of DeepSeek V3 0324.
+
+ Quantized by [Eric Hartford](https://huggingface.co/ehartford) and [v2ray](https://huggingface.co/v2ray)
+
+ This quant modified some of the model code to fix an overflow issue when using float16.
+
+ To serve using vLLM with 8x 80GB GPUs, use the following command:
+ ```sh
+ VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 12345 --max-model-len 65536 --max-num-batched-tokens 65536 --trust-remote-code --tensor-parallel-size 8 --gpu-memory-utilization 0.97 --dtype float16 --served-model-name deepseek-chat --model cognitivecomputations/DeepSeek-V3-0324-AWQ
+ ```
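Once the server is up, it exposes an OpenAI-compatible API on the port given above. A minimal request sketch, assuming the server is reachable on localhost; the port and served model name are taken from the serve command, while the prompt text is just an example:
```sh
# Chat completion request against the vLLM OpenAI-compatible server started above.
# localhost and the prompt are assumptions; port 12345 and "deepseek-chat" come from the serve command.
curl http://localhost:12345/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 128
      }'
```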
+ You can download the wheel I built for PyTorch 2.6, Python 3.12 by clicking [here](https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl).
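The wheel can also be installed directly from that URL; a quick sketch, assuming an environment that matches the wheel tags (CUDA 12.6, CPython 3.12, linux_x86_64):
```sh
# Install the prebuilt vLLM wheel straight from the Hugging Face URL above.
# Assumes the host matches the wheel tags: CUDA 12.6, Python 3.12, linux_x86_64.
pip install "https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl"
```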
+
+ Inference speed with batch size 1 and a short prompt:
+ - 8x H100: 48 TPS
+ - 8x A100: 38 TPS
+
+ Notes:
+ - Inference speed will be better than FP8 at low batch sizes but worse than FP8 at high batch sizes; this is the nature of low-bit quantization.
+ - vLLM now supports MLA for AWQ, so you can run this model with the full context length on just 8x 80GB GPUs.