Qwen3-8B-GGUF

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support.

Model Files

| File Name | Size | Quantization | Format | Description |
|---|---|---|---|---|
| Qwen3_8B.F32.gguf | 32.8 GB | FP32 | GGUF | Full precision (float32) version |
| Qwen3_8B.BF16.gguf | 16.4 GB | BF16 | GGUF | BFloat16 precision version |
| Qwen3_8B.F16.gguf | 16.4 GB | FP16 | GGUF | Float16 precision version |
| Qwen3_8B.Q2_K.gguf | 3.28 GB | Q2_K | GGUF | 2-bit quantized (K variant) |
| Qwen3_8B.Q3_K_M.gguf | 4.12 GB | Q3_K_M | GGUF | 3-bit quantized (K, medium variant) |
| Qwen3_8B.Q3_K_S.gguf | 3.77 GB | Q3_K_S | GGUF | 3-bit quantized (K, small variant) |
| Qwen3_8B.Q4_K_M.gguf | 5.03 GB | Q4_K_M | GGUF | 4-bit quantized (K, medium variant) |
| Qwen3_8B.Q4_K_S.gguf | 4.8 GB | Q4_K_S | GGUF | 4-bit quantized (K, small variant) |
| Qwen3_8B.Q5_K_M.gguf | 5.85 GB | Q5_K_M | GGUF | 5-bit quantized (K, medium variant) |
| Qwen3_8B.Q8_0.gguf | 8.71 GB | Q8_0 | GGUF | 8-bit quantized |
| .gitattributes | 2.08 kB | — | — | Git LFS tracking file |
| config.json | 31 B | — | — | Configuration placeholder |
| README.md | 31 B | — | — | Model documentation |
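
To pull a single quant without cloning the whole repo, one option is the huggingface_hub Python client. A minimal sketch, assuming you want the Q4_K_M file (repo ID and filename taken from this page):

```python
from huggingface_hub import hf_hub_download

# Download one GGUF file from this repo into the local Hugging Face cache.
model_path = hf_hub_download(
    repo_id="prithivMLmods/Qwen3-8B-GGUF",
    filename="Qwen3_8B.Q4_K_M.gguf",  # ~5.03 GB; swap in a smaller quant if needed
)
print(model_path)  # local path to the downloaded file
```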

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable over similar-sized non-IQ quants.)

| Link | Type | Size (GB) | Notes |
|---|---|---|---|
| GGUF | Q2_K | 0.4 | |
| GGUF | Q3_K_S | 0.5 | |
| GGUF | Q3_K_M | 0.5 | lower quality |
| GGUF | Q3_K_L | 0.5 | |
| GGUF | IQ4_XS | 0.6 | |
| GGUF | Q4_K_S | 0.6 | fast, recommended |
| GGUF | Q4_K_M | 0.6 | fast, recommended |
| GGUF | Q5_K_S | 0.6 | |
| GGUF | Q5_K_M | 0.7 | |
| GGUF | Q6_K | 0.7 | very good quality |
| GGUF | Q8_0 | 0.9 | fast, best quality |
| GGUF | f16 | 1.6 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[image: quant type comparison graph]
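
To try one of these quants locally, here is a minimal sketch using the llama-cpp-python bindings (an assumption; any GGUF-capable runtime such as llama.cpp works equally well):

```python
from llama_cpp import Llama

# Load the quantized model; model_path can come from hf_hub_download above.
llm = Llama(
    model_path="Qwen3_8B.Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise it if you have the memory
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in two sentences."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```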

Model size: 8.19B params
Architecture: qwen3
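
As a rough, unofficial rule of thumb, inference memory is approximately the GGUF file size plus KV-cache and runtime overhead. A hypothetical helper (the 1.5 GB overhead default is an assumption and grows with context length):

```python
def estimate_memory_gb(file_size_gb: float, overhead_gb: float = 1.5) -> float:
    """Crude estimate: weight file size + KV-cache / runtime overhead (both in GB)."""
    return file_size_gb + overhead_gb

# Example: the 5.03 GB Q4_K_M file from the table above.
print(f"~{estimate_memory_gb(5.03):.1f} GB needed")  # ~6.5 GB
```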



Model tree for prithivMLmods/Qwen3-8B-GGUF

Base model: Qwen/Qwen3-8B-Base
Finetuned: Qwen/Qwen3-8B
Quantized: this model
