Brianpuz/Qwen2-0.5B-Instruct-Q4_K_M-GGUF

Absolutely tremendous! This repo features GGUF quantized versions of Qwen/Qwen2-0.5B-Instruct, made possible by the very powerful llama.cpp. Believe me, it's fast, it's smart, it's winning.

Quantized Versions:

Only the best quantization (Q4_K_M). You'll love it.
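For intuition about what Q4_K_M means: each weight is stored in roughly 4 bits, with shared per-block scaling factors. The real K-quant scheme in llama.cpp is more elaborate (super-blocks with per-sub-block scales and mins), so this is only a minimal sketch of the basic round-to-4-bits idea, using hypothetical helper names:

```python
# Simplified illustration of 4-bit block quantization.
# NOT the actual Q4_K_M layout, just the core idea: map a block of
# floats onto 4-bit integers [0, 15] plus a per-block scale and offset.
def quantize_4bit(block):
    lo, hi = min(block), max(block)
    scale = (hi - lo) / 15 or 1.0  # avoid division by zero for constant blocks
    q = [round((x - lo) / scale) for x in block]
    return q, scale, lo

def dequantize_4bit(q, scale, lo):
    return [v * scale + lo for v in q]

weights = [0.12, -0.5, 0.33, 0.8, -0.07, 0.25, -0.9, 0.41]
q, scale, lo = quantize_4bit(weights)
restored = dequantize_4bit(q, scale, lo)
# Each value now needs 4 bits instead of 32; rounding bounds the
# reconstruction error by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The trade is the whole point of this repo: a much smaller file and faster inference, at the cost of a small, bounded loss of precision.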

Run with llama.cpp

Just plug it in, hit the command line, and boom: you're running world-class AI, folks:

llama-cli --hf-repo Brianpuz/Qwen2-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2-0.5b-instruct-q4_k_m.gguf -p "AI First, but also..."
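What that command fetches is a single GGUF container file. Its fixed-size header is easy to sanity-check: the magic bytes b"GGUF", a little-endian uint32 version, then (since GGUF v2) uint64 tensor and metadata-key counts. A minimal stdlib-only sketch, parsing a fabricated header rather than the real download:

```python
import struct

def read_gguf_header(buf):
    """Parse the fixed GGUF header: magic, version, tensor count, KV count."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version, n_tensors, n_kv

# Fabricated header for demonstration: version 3, 2 tensors, 5 metadata keys.
sample = struct.pack("<4sIQQ", b"GGUF", 3, 2, 5)
print(read_gguf_header(sample))  # → (3, 2, 5)
```

Running the same parse on the first 24 bytes of the downloaded qwen2-0.5b-instruct-q4_k_m.gguf is a quick way to confirm the file arrived intact before pointing llama-cli at it.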

This beautiful Hugging Face Space was brought to you by the amazing team at Antigma Labs. Great people. Big vision. Doing things that matter, and doing them right. Total winners.

Format: GGUF
Model size: 494M params
Architecture: qwen2

Quantization: 4-bit


Model tree for Brianpuz/Qwen2-0.5B-Instruct-Q4_K_M-GGUF

Base model: Qwen/Qwen2-0.5B (this repo is one of its quantized versions)