Brianpuz/DeepSeek-R1-DRAFT-Qwen2.5-0.5B-Q4_K_M-GGUF

Absolutely tremendous! This repo features a GGUF quantized version of alamios/DeepSeek-R1-DRAFT-Qwen2.5-0.5B, made possible by the very powerful llama.cpp. Believe me, it's fast, it's smart, it's winning.

Quantized Versions:

Q4_K_M (4-bit). Only the best quantization. You'll love it.
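For intuition about what Q4_K_M means: weights are stored as 4-bit integers in blocks that share a scale factor. The sketch below is a simplified symmetric blockwise 4-bit quantizer for illustration only, not llama.cpp's actual K-quant layout (the real Q4_K format uses super-blocks with separate 6-bit sub-block scales and minimums):

```python
import numpy as np

def quantize_block(block: np.ndarray, bits: int = 4):
    """Simplified symmetric blockwise quantization: signed ints plus one scale."""
    qmax = 2 ** (bits - 1) - 1               # 7 for 4-bit
    scale = float(np.abs(block).max()) / qmax
    if scale == 0.0:
        scale = 1.0                           # all-zero block: any scale works
    q = np.clip(np.round(block / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize_block(q: np.ndarray, scale: float) -> np.ndarray:
    # Reconstruct approximate float weights from the 4-bit codes.
    return q.astype(np.float32) * scale
```

The round-trip error per weight is at most half a quantization step (scale / 2), which is why small blocks with their own scales lose far less accuracy than one global scale would.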

Run with llama.cpp

Just plug it in, hit the command line, and boom — you're running world-class AI, folks:

```shell
llama-cli --hf-repo Brianpuz/DeepSeek-R1-DRAFT-Qwen2.5-0.5B-Q4_K_M-GGUF \
  --hf-file deepseek-r1-draft-qwen2.5-0.5b-q4_k_m.gguf \
  -p "AI First, but also..."
```
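The "DRAFT" in the name matters: this model is meant to be the small draft in speculative decoding, where a cheap model proposes several tokens and a large target model verifies them in one forward pass (llama.cpp exposes this via the `--model-draft`/`-md` option). As a back-of-envelope sketch, assuming a constant, independent per-token acceptance rate (a simplification; real acceptance varies with context), the expected number of tokens produced per target-model pass follows a geometric series:

```python
def expected_tokens_per_pass(accept_rate: float, draft_len: int) -> float:
    # Expected tokens generated per verification pass when the draft model
    # proposes draft_len tokens, each accepted independently with probability
    # accept_rate; the target always contributes at least one token.
    # Closed form of sum(accept_rate**i for i in range(draft_len + 1)).
    if accept_rate == 1.0:
        return float(draft_len + 1)
    return (1 - accept_rate ** (draft_len + 1)) / (1 - accept_rate)
```

With a draft length of 4 and an 80% acceptance rate, each expensive target pass yields about 3.4 tokens instead of 1, which is where the speedup comes from.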

This beautiful Hugging Face Space was brought to you by the amazing team at Antigma Labs. Great people. Big vision. Doing things that matter — and doing them right. Total winners.

Model details:
Model size: 494M params
Architecture: qwen2
Quantization: 4-bit (Q4_K_M)


Model tree for Brianpuz/DeepSeek-R1-DRAFT-Qwen2.5-0.5B-Q4_K_M-GGUF:
Base model: Qwen/Qwen2.5-0.5B (this repo is one of its quantized variants)
