huihui-ai committed · Commit 2fcc4ff · verified · 1 Parent(s): 8d48745

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -33,7 +33,7 @@ cd deepseek-ai/DeepSeek-V3/inference
 python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3-bf16
 ```
 ## BF16 to f16.gguf
-1. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) (Download the latest version.) conversion program to convert DeepSeek-V3-bf16 to gguf format, requires an additional approximately 1.3 TB of space.
+1. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) (Download the latest version) conversion program to convert DeepSeek-V3-bf16 to gguf format; this requires approximately 1.3 TB of additional space.
 ```
 python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-V3-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf --outtype f16
 ```
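
For context, the README step in this diff assumes a local llama.cpp checkout whose conversion dependencies are already installed. A minimal sketch of the surrounding commands (the clone and `pip install` steps are not part of the diff; model paths are the placeholders used in the README):

```
# Sketch only: fetch llama.cpp and install the conversion script dependencies
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the BF16 checkpoint to a single f16 GGUF file (needs roughly 1.3 TB of extra space)
python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-V3-bf16 \
  --outfile /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf \
  --outtype f16
```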