---
base_model:
- deepseek-ai/DeepSeek-V3-Base
pipeline_tag: text-generation
---

Llama.cpp quantization based on this [llama.cpp PR](https://github.com/ggerganov/llama.cpp/pull/11049). Big thanks to [fairydreaming](https://github.com/fairydreaming)!

The quantization was performed on my BF16 conversion, [DevQuasar/deepseek-ai.DeepSeek-V3-Base-bf16](https://huggingface.co/DevQuasar/deepseek-ai.DeepSeek-V3-Base-bf16).
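
For reference, here is a minimal sketch of loading a GGUF file from this repo with `llama-cpp-python`. The filename and parameters below are placeholders, not the exact artifacts in this repo, and DeepSeek-V3 support requires a llama.cpp / llama-cpp-python build recent enough to include the PR linked above.

```python
# Minimal sketch: run a quantized GGUF with llama-cpp-python.
# Requires a build that includes DeepSeek-V3 support from the PR above.
from llama_cpp import Llama

# Placeholder filename; substitute the actual GGUF file (or first shard) from this repo.
llm = Llama(
    model_path="deepseek-ai.DeepSeek-V3-Base.Q4_K_M.gguf",
    n_ctx=2048,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```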

Inference proof:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e6d37e02dee9bcb9d9fa18/PhHPBJMVXnWjIxBIbvx0g.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e6d37e02dee9bcb9d9fa18/6MSHSY7Gut2cyXYa0hhLP.png)


I'm doing this to 'Make knowledge free for everyone', using my personal time and resources.

If you want to support my efforts, please visit my Ko-fi page: https://ko-fi.com/devquasar

Also, feel free to visit my website: https://devquasar.com/