---
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-V3
tags:
- deepseek_v3
- bf16
- Safetensors
- custom_code
---

# huihui-ai/DeepSeek-V3-bf16

This model was converted from DeepSeek-V3 to BF16.
Here we simply provide the conversion commands and related information about ollama.

**The following conversion also applies to [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1).**
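
As a sketch, the R1 conversion uses the same commands with only the model id swapped; the local paths below mirror the V3 layout used in this guide and are assumptions:

```
cd /home/admin/models
huggingface-cli download deepseek-ai/DeepSeek-R1 --local-dir ./deepseek-ai/DeepSeek-R1
# After setting up the environment as described below:
python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-R1-bf16
```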

If needed, we can upload the BF16 version.

## FP8 to BF16

1. Download the [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) model; this requires approximately 641 GB of space.
```
cd /home/admin/models
huggingface-cli download deepseek-ai/DeepSeek-V3 --local-dir ./deepseek-ai/DeepSeek-V3
```
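
Before converting, it can be worth confirming the download completed; a quick size check against the ~641 GB figure above:

```
# Total on-disk size of the downloaded snapshot; expect roughly 641 GB
du -sh ./deepseek-ai/DeepSeek-V3
```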

2. Create the environment.
```
conda create -yn DeepSeek-V3 python=3.12
conda activate DeepSeek-V3
pip install -r requirements.txt
```
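
The requirements.txt installed here, along with the fp8_cast_bf16.py script used in the next step, come from the inference directory of the [DeepSeek-V3 GitHub repository](https://github.com/deepseek-ai/DeepSeek-V3); if they are not already present in your working tree, a minimal way to fetch them:

```
# Clone the official repo to get inference/fp8_cast_bf16.py and its requirements
git clone https://github.com/deepseek-ai/DeepSeek-V3.git
pip install -r DeepSeek-V3/inference/requirements.txt
```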

3. Convert to BF16; this requires approximately 1.3 TB of additional space.
```
cd deepseek-ai/DeepSeek-V3/inference
python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3-bf16
```
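
As an optional sanity check, one output shard can be inspected to confirm the weights are now BF16; a minimal sketch, assuming the safetensors and torch packages from requirements.txt:

```
python - <<'EOF'
# Print the dtype of one tensor from the first converted shard
from safetensors import safe_open
import glob
shard = sorted(glob.glob("/home/admin/models/deepseek-ai/DeepSeek-V3-bf16/*.safetensors"))[0]
with safe_open(shard, framework="pt") as f:
    name = next(iter(f.keys()))
    print(name, f.get_tensor(name).dtype)  # expect torch.bfloat16
EOF
```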

## BF16 to GGUF

1. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) conversion script (download the latest version) to convert DeepSeek-V3-bf16 to GGUF format; this requires approximately 1.3 TB of additional space. If llama.cpp is not yet set up, see the sketch after this command.
```
python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-V3-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf --outtype f16
```
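
The conversion script here and the llama-quantize / llama-cli binaries used in steps 2 and 3 all come from llama.cpp; a minimal setup sketch with default CMake options:

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Python dependencies for convert_hf_to_gguf.py
pip install -r requirements.txt
# Build the quantization and CLI tools used below
cmake -B build
cmake --build build --config Release -t llama-quantize llama-cli
```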

2. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) llama-quantize program to quantize the model (llama-quantize needs to be compiled; see the setup sketch above). Other [quantization options](https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp) are listed in quantize.cpp. Convert to Q2_K first; this requires approximately 227 GB of additional space.
```
llama-quantize /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf Q2_K
```
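
The same invocation works for the other quantization types listed in quantize.cpp; for example, Q3_K_M (one of the standard llama.cpp presets):

```
llama-quantize /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q3_K_M.gguf Q3_K_M
```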

3. Use llama-cli to test (llama-cli also needs to be compiled; the setup sketch above covers it).
```
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf -n 2048
```
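
For a quick one-shot test rather than an open-ended run, a prompt can be passed with -p; the prompt text here is just an example:

```
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf -p "Why is the sky blue?" -n 256
```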

## Use with ollama

**Note:** this model requires [Ollama 0.5.5](https://github.com/ollama/ollama/releases/tag/v0.5.5).

You can use [huihui_ai/deepseek-v3:671b-q2_K](https://ollama.com/huihui_ai/deepseek-v3:671b-q2_K) directly:
```
ollama run huihui_ai/deepseek-v3:671b-q2_K
```

or [huihui_ai/deepseek-v3:671b-q3_K](https://ollama.com/huihui_ai/deepseek-v3:671b-q3_K):
```
ollama run huihui_ai/deepseek-v3:671b-q3_K
```
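
Alternatively, a locally quantized gguf from the steps above can be imported into ollama with a Modelfile; a minimal sketch, where the model name is just an example:

```
# Point ollama at the local Q2_K gguf and register it under a local name
echo 'FROM /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf' > Modelfile
ollama create deepseek-v3-local:671b-q2_K -f Modelfile
ollama run deepseek-v3-local:671b-q2_K
```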