# Brianpuz/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-Q3_K_S-GGUF
This repo contains GGUF quantized versions of [`deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B`](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B), produced with llama.cpp.
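For reference, GGUF quantizations like these are usually produced with llama.cpp's conversion and quantization tools. The sketch below shows a typical pipeline; the exact commands and input paths used for this repo are assumptions, not documented here.

```
# Convert a local checkout of the HF checkpoint to GGUF at f16 (path is illustrative)
python convert_hf_to_gguf.py ./DeepSeek-R1-Distill-Qwen-1.5B --outtype f16 --outfile deepseek-r1-distill-qwen-1.5b-f16.gguf

# Quantize the f16 GGUF down to Q4_K_M (repeat with Q3_K_S for the other variant)
llama-quantize deepseek-r1-distill-qwen-1.5b-f16.gguf deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf Q4_K_M
```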
## Quantized Versions:
- `deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf`
- `deepseek-r1-distill-qwen-1.5b-q3_k_s.gguf`
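To fetch a single file without llama.cpp's built-in downloader, the `huggingface-cli` tool from the `huggingface_hub` package works as well (a sketch; this tooling is standard but not part of the original instructions):

```
huggingface-cli download Brianpuz/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-Q3_K_S-GGUF deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf --local-dir .
```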
## Run with llama.cpp

```
llama-cli --hf-repo Brianpuz/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-Q3_K_S-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf -p "The meaning of life is"
```

(Replace the filename to use the other variant.)
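The same files can be served over HTTP with `llama-server`, using the same `--hf-repo`/`--hf-file` flags (a sketch; the port and context size below are illustrative, not from the original):

```
llama-server --hf-repo Brianpuz/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M-Q3_K_S-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-q3_k_s.gguf -c 2048 --port 8080
```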