huihui-ai committed on
Commit 7c6d913 · verified · 1 Parent(s): 2fcc4ff

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -32,7 +32,7 @@ pip install -r requirements.txt
 cd deepseek-ai/DeepSeek-V3/inference
 python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3-bf16
 ```
-## BF16 to f16.gguf
+## BF16 to gguf
 1. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) (Download the latest version) conversion program to convert DeepSeek-V3-bf16 to gguf format, requires an additional approximately 1.3 TB of space.
 ```
 python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-V3-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf --outtype f16
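The two conversion steps shown in the diff (FP8 → BF16 via DeepSeek's `fp8_cast_bf16.py`, then BF16 → f16 GGUF via llama.cpp's `convert_hf_to_gguf.py`) could be wrapped in a small script. This is a minimal sketch, not part of the commit: the `DRY_RUN` toggle and the `run` helper are hypothetical additions, and the paths are the ones used in the README, so adjust them for your environment.

```shell
#!/bin/sh
# Sketch of the two-step DeepSeek-V3 conversion pipeline from the README.
# With DRY_RUN=1 (the default) the commands are only printed, not executed,
# since the real run needs the model weights and ~1.3 TB of free space.

SRC=/home/admin/models/deepseek-ai/DeepSeek-V3
BF16=${SRC}-bf16
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"        # print the command instead of running it
  else
    "$@"
  fi
}

# Step 1: FP8 -> BF16 (fp8_cast_bf16.py ships in deepseek-ai/DeepSeek-V3/inference)
run python fp8_cast_bf16.py --input-fp8-hf-path "$SRC/" --output-bf16-hf-path "$BF16"

# Step 2: BF16 -> f16 GGUF (convert_hf_to_gguf.py ships with llama.cpp)
run python convert_hf_to_gguf.py "$BF16" --outfile "$BF16/ggml-model-f16.gguf" --outtype f16
```

Running the script as-is prints both commands for review; set `DRY_RUN=0` once the source weights are in place.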