Vulpecula-4B-GGUF

Vulpecula-4B is fine-tuned on the SK1.1 traces, a set of 1,000 DeepSeek thinking-trajectory entries, together with the Fine-Tome 100k and Open Math Reasoning datasets. This specialized 4B-parameter model is built for enhanced mathematical reasoning, logical problem solving, and structured content generation, and is optimized for precision and step-by-step explanation.

Model Files

| File Name | Size | Quantization | Format | Description |
|-----------|------|--------------|--------|-------------|
| Vulpecula-4B.F16.gguf | 8.05 GB | FP16 | GGUF | Float16 precision version |
| Vulpecula-4B.Q4_K_M.gguf | 2.5 GB | Q4_K_M | GGUF | 4-bit quantized (K M variant) |
| Vulpecula-4B.Q5_K_M.gguf | 2.89 GB | Q5_K_M | GGUF | 5-bit quantized (K M variant) |
| Vulpecula-4B.Q8_0.gguf | 4.28 GB | Q8_0 | GGUF | 8-bit quantized |
| .gitattributes | 1.8 kB | – | – | Git LFS tracking file |
| README.md | 31 B | – | – | Model documentation |
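
To run one of the files above locally, the snippet below is a minimal sketch using llama-cpp-python together with huggingface_hub to fetch the Q4_K_M quant. The choice of runtime, context size, and sampling settings are assumptions on my part; any other GGUF runtime (llama.cpp, LM Studio, Ollama) can load the same files.

```python
# Minimal sketch (assumes the llama-cpp-python and huggingface_hub packages are installed).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the 4-bit quant listed in the Model Files table above.
model_path = hf_hub_download(
    repo_id="prithivMLmods/Vulpecula-4B-GGUF",
    filename="Vulpecula-4B.Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; adjust to your hardware (assumed value)
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

# The model is tuned for step-by-step reasoning, so ask for a worked solution.
out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Solve step by step: if 3x + 7 = 22, what is x?"}
    ],
    max_tokens=512,
    temperature=0.6,   # assumed sampling setting
)
print(out["choices"][0]["message"]["content"])
```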

Quants Usage

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|---------|-------|
| GGUF | Q2_K | 0.4 | |
| GGUF | Q3_K_S | 0.5 | |
| GGUF | Q3_K_M | 0.5 | lower quality |
| GGUF | Q3_K_L | 0.5 | |
| GGUF | IQ4_XS | 0.6 | |
| GGUF | Q4_K_S | 0.6 | fast, recommended |
| GGUF | Q4_K_M | 0.6 | fast, recommended |
| GGUF | Q5_K_S | 0.6 | |
| GGUF | Q5_K_M | 0.7 | |
| GGUF | Q6_K | 0.7 | very good quality |
| GGUF | Q8_0 | 0.9 | fast, best quality |
| GGUF | f16 | 1.6 | 16 bpw, overkill |

(Figure: a handy graph by ikawrakow comparing some lower-quality quant types; lower is better.)
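
The table above is a general quant overview; the quants actually published in this repository are the ones listed under Model Files. The sketch below, which assumes the huggingface_hub package, simply lists the .gguf files in the repo so you can pick the quant that fits your memory budget.

```python
# Minimal sketch (assumes huggingface_hub is installed): list the GGUF quants
# actually available in this repository before downloading one.
from huggingface_hub import list_repo_files

files = list_repo_files("prithivMLmods/Vulpecula-4B-GGUF")
gguf_files = sorted(f for f in files if f.endswith(".gguf"))

for name in gguf_files:
    print(name)  # e.g. Vulpecula-4B.Q4_K_M.gguf
```

Per the notes in the table, the Q4_K_* quants are a reasonable starting point when balancing speed and quality.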

Model tree for prithivMLmods/Vulpecula-4B-GGUF

- Base model: Qwen/Qwen3-4B-Base
- Finetuned: Qwen/Qwen3-4B
- Quantized (3): this model
