# Qwen3-4B-Valiant-Polaris-f32-GGUF

ZeroXClem/Qwen-4B-Valiant-Polaris is a thoughtfully merged 4B-parameter language model built on Qwen3-4B. It combines the structured reasoning of Polaris, the creative and expressive capabilities of Dot-Goat and RP-V3, and the scientific depth of ShiningValiant3, yielding a lightweight yet powerful model designed for advanced reasoning, rich roleplay, scientific and analytical tasks, and seamless agentic workflows. With robust support for long contexts, multilingual reasoning, and tool integration, it is well suited to conversational agents, tutoring, problem solving, creative writing, and autonomous agent applications.

## Model Files

| File name | Size | Quant type |
|---|---|---|
| Qwen3-4B-Valiant-Polaris.BF16.gguf | 8.05 GB | BF16 |
| Qwen3-4B-Valiant-Polaris.F16.gguf | 8.05 GB | F16 |
| Qwen3-4B-Valiant-Polaris.F32.gguf | 16.1 GB | F32 |
| Qwen3-4B-Valiant-Polaris.Q2_K.gguf | 1.67 GB | Q2_K |
| Qwen3-4B-Valiant-Polaris.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| Qwen3-4B-Valiant-Polaris.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| Qwen3-4B-Valiant-Polaris.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| Qwen3-4B-Valiant-Polaris.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| Qwen3-4B-Valiant-Polaris.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| Qwen3-4B-Valiant-Polaris.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| Qwen3-4B-Valiant-Polaris.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| Qwen3-4B-Valiant-Polaris.Q6_K.gguf | 3.31 GB | Q6_K |
| Qwen3-4B-Valiant-Polaris.Q8_0.gguf | 4.28 GB | Q8_0 |
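To see what each quant actually costs per parameter, you can divide file size by parameter count. A minimal sketch, using the decimal-gigabyte sizes and the 4.02B parameter count stated on this card (the small overhead above each quant's nominal bit width comes from quantization scales, metadata, and higher-precision tensors such as embeddings):

```python
# Estimate the average bits stored per weight for selected quants
# listed in the table above. Sizes are decimal GB as shown on the card.
PARAMS = 4.02e9  # parameter count from the card

SIZES_GB = {
    "Q2_K": 1.67,
    "Q4_K_M": 2.5,
    "Q8_0": 4.28,
    "F16": 8.05,
    "F32": 16.1,
}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in decimal gigabytes to average bits per parameter."""
    return size_gb * 1e9 * 8 / params

for name, size_gb in SIZES_GB.items():
    print(f"{name}: {bits_per_weight(size_gb):.2f} bits/weight")
```

For example, the Q4_K_M file works out to roughly 5 bits per weight rather than exactly 4, and Q8_0 to roughly 8.5, which is the expected overhead for these formats.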

## Quants Usage

(Sorted by size, not necessarily quality. IQ quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![Quant type comparison graph by ikawrakow](image.png)
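As a usage sketch, assuming llama.cpp and the `huggingface-cli` tool are installed (the repo and file names below are taken from this card; swap in whichever quant from the table fits your hardware):

```shell
# Download a single quant file from the repository.
huggingface-cli download prithivMLmods/Qwen3-4B-Valiant-Polaris-f32-GGUF \
  Qwen3-4B-Valiant-Polaris.Q4_K_M.gguf --local-dir .

# Run a prompt with llama.cpp's CLI (-n limits the number of generated tokens).
llama-cli -m Qwen3-4B-Valiant-Polaris.Q4_K_M.gguf \
  -p "Explain the difference between Q4_K_M and Q8_0 quantization." -n 256
```

Smaller quants (Q2_K, Q3_K_*) trade quality for memory; Q4_K_M and above are the usual starting points on consumer hardware.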

## Model Details

- Downloads last month: 472
- Format: GGUF
- Model size: 4.02B params
- Architecture: qwen3
- Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit

## Model Tree

Model tree for prithivMLmods/Qwen3-4B-Valiant-Polaris-f32-GGUF: Quantized (4), including this model.