Update README.md
README.md (changed):
@@ -38,8 +38,8 @@ python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepS
 python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-V3-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf --outtype f16
 ```
 2. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) quantization tool to quantize the model (llama-quantize needs to be compiled);
-see the other [quant options](https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp).
-First convert to Q2_K.
+see the other [quant options](https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp).
+First convert to Q2_K, which requires approximately 227 GB of additional space.
 ```
 llama-quantize /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf Q2_K
 ```
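
The quantize step above assumes llama-quantize has already been built from a llama.cpp checkout. Below is a minimal sketch of getting there, assuming a current CMake-based llama.cpp tree; the requirements file, build directory, and binary location are the usual upstream defaults and may differ between releases.

```sh
# Sketch only: obtain and build llama.cpp so that llama-quantize (and the
# Python dependencies used by convert_hf_to_gguf.py) are available.
# Paths and layout are assumptions based on current upstream defaults.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Python dependencies for the conversion script
pip install -r requirements.txt

# CMake build; compiled binaries, including llama-quantize, end up in build/bin
cmake -B build
cmake --build build --config Release -j

# Running llama-quantize with no arguments prints usage and the list of quant types
./build/bin/llama-quantize
```

Since ggml-model-Q2_K.gguf is written next to the f16 GGUF, the extra ~227 GB noted in the diff is presumably the size of the Q2_K output on top of the f16 file already on disk.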