|
--- |
|
base_model: |
|
- deepseek-ai/DeepSeek-V3-Base |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
Llama.cpp quantization based on this [llama.cpp PR](https://github.com/ggerganov/llama.cpp/pull/11049). Big thanks to [fairydreaming](https://github.com/fairydreaming)!
|
|
|
The quantization was performed on my BF16 conversion: [DevQuasar/deepseek-ai.DeepSeek-V3-Base-bf16](https://huggingface.co/DevQuasar/deepseek-ai.DeepSeek-V3-Base-bf16)
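If you want to run a quick sanity check on the quantized files yourself, below is a minimal sketch using `llama-cpp-python`. The GGUF file name, context size, and prompt are placeholders, not the exact files in this repo, and DeepSeek-V3 support requires a llama.cpp build that already includes the PR linked above.

```python
# Minimal smoke-test sketch with llama-cpp-python (file name and settings are assumptions).
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-v3-base-Q4_K_M-00001-of-00002.gguf",  # placeholder shard name
    n_ctx=2048,       # small context, just enough for a quick check
    n_gpu_layers=-1,  # offload as many layers as fit; adjust for your hardware
)

out = llm("The capital of France is", max_tokens=16, temperature=0.0)
print(out["choices"][0]["text"])
```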
|
|
|
Inference proof:

*(inference screenshots)*
|
|