---
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-V3
tags:
- deepseek_v3
- bf16
- Safetensors
- custom_code
---
# huihui-ai/DeepSeek-V3-bf16
This model was converted from DeepSeek-V3 to BF16.
Here we provide the conversion commands and related information for Ollama.
**The same conversion also applies to [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1).**
If needed, we can upload the BF16 version.
## FP8 to BF16
1. Download the [deepseek-ai/DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) model; this requires approximately 641 GB of space.
```
cd /home/admin/models
huggingface-cli download deepseek-ai/DeepSeek-V3 --local-dir ./deepseek-ai/DeepSeek-V3
```
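Before starting the download, it is worth checking that the target volume actually has the headroom. A simple sanity check, assuming the paths above:
```
df -h /home/admin/models    # needs roughly 641 GB free for the FP8 weights
du -sh /home/admin/models/deepseek-ai/DeepSeek-V3    # verify the size once the download finishes
```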
2. Create the environment and install the conversion script's dependencies from the downloaded model's `inference` folder.
```
conda create -yn DeepSeek-V3 python=3.12
conda activate DeepSeek-V3
cd /home/admin/models/deepseek-ai/DeepSeek-V3/inference
pip install -r requirements.txt
```
3. Convert to BF16; this requires approximately 1.3 TB of additional space.
```
cd /home/admin/models/deepseek-ai/DeepSeek-V3/inference
python fp8_cast_bf16.py --input-fp8-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3/ --output-bf16-hf-path /home/admin/models/deepseek-ai/DeepSeek-V3-bf16
```
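A quick sanity check on the conversion output (assuming the output path above):
```
du -sh /home/admin/models/deepseek-ai/DeepSeek-V3-bf16    # expect roughly 1.3 TB
ls /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/*.safetensors | wc -l    # count the BF16 shards
```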
## BF16 to GGUF
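The steps below need a llama.cpp checkout: `convert_hf_to_gguf.py` requires its Python dependencies, and `llama-quantize` / `llama-cli` must be compiled. A typical setup, using the stock llama.cpp build steps (the compiled binaries land in `build/bin/`):
```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt
cmake -B build
cmake --build build --config Release -j
```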
1. Use the [llama.cpp](https://github.com/ggerganov/llama.cpp) conversion script (run from the checkout above) to convert DeepSeek-V3-bf16 to GGUF format; this requires approximately 1.3 TB of additional space.
```
python convert_hf_to_gguf.py /home/admin/models/deepseek-ai/DeepSeek-V3-bf16 --outfile /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf --outtype f16
```
2. Use the llama.cpp quantization tool to quantize the model (`llama-quantize` is built in the step above); other quantization types are listed in [quantize.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp).
Convert to Q2_K first; this requires approximately 227 GB of additional space.
```
llama-quantize /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf Q2_K
```
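The same command works for other quantization types; for example, a q3_K model like the one referenced in the Ollama section below (Q3_K_M is one of the Q3_K variants, and the output filename here is just an example):
```
llama-quantize /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-f16.gguf /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q3_K_M.gguf Q3_K_M
```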
3. Use `llama-cli` (also built above) to test the model.
```
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf -n 2048
```
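For a one-shot generation instead of an interactive session, you can pass a prompt directly (standard `llama-cli` flags):
```
llama-cli -m /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf -p "Why is the sky blue?" -n 256
```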
## Use with ollama
**Note:** this model requires [Ollama 0.5.5](https://github.com/ollama/ollama/releases/tag/v0.5.5)
You can use [huihui_ai/deepseek-v3:671b-q2_K](https://ollama.com/huihui_ai/deepseek-v3:671b-q2_K) directly:
```
ollama run huihui_ai/deepseek-v3:671b-q2_K
```
or [huihui_ai/deepseek-v3:671b-q3_K](https://ollama.com/huihui_ai/deepseek-v3:671b-q3_K):
```
ollama run huihui_ai/deepseek-v3:671b-q3_K
```
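If you would rather run your own locally quantized GGUF than pull from our Ollama library, Ollama can also import it via a Modelfile (the model name `deepseek-v3-local` is just an example):
```
cat > Modelfile <<'EOF'
FROM /home/admin/models/deepseek-ai/DeepSeek-V3-bf16/ggml-model-Q2_K.gguf
EOF
ollama create deepseek-v3-local -f Modelfile
ollama run deepseek-v3-local
```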