---
license: apache-2.0
language:
- en
base_model:
- prithivMLmods/Cerium-Qwen3-R1-Dev
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- r1
---
# Cerium-Qwen3-R1-Dev-GGUF
Cerium-Qwen3-R1-Dev is a high-efficiency, multi-domain model fine-tuned from Qwen3-0.6B on the rStar-Coder dataset, enhanced with code expert clusters, an extended open code reasoning dataset, and DeepSeek R1 coding sample traces. The model blends symbolic precision, scientific logic, and structured output fluency, making it well suited for developers, educators, and researchers who need advanced reasoning under constrained compute.
## Model Files
| File Name | Quant Type | File Size |
|---|---|---|
| Cerium-Qwen3-R1-Dev.BF16.gguf | BF16 | 1.2 GB |
| Cerium-Qwen3-R1-Dev.F16.gguf | F16 | 1.2 GB |
| Cerium-Qwen3-R1-Dev.F32.gguf | F32 | 2.39 GB |
| Cerium-Qwen3-R1-Dev.Q2_K.gguf | Q2_K | 296 MB |
| Cerium-Qwen3-R1-Dev.Q3_K_L.gguf | Q3_K_L | 368 MB |
| Cerium-Qwen3-R1-Dev.Q3_K_M.gguf | Q3_K_M | 347 MB |
| Cerium-Qwen3-R1-Dev.Q3_K_S.gguf | Q3_K_S | 323 MB |
| Cerium-Qwen3-R1-Dev.Q4_K_M.gguf | Q4_K_M | 397 MB |
| Cerium-Qwen3-R1-Dev.Q4_K_S.gguf | Q4_K_S | 383 MB |
| Cerium-Qwen3-R1-Dev.Q5_K_M.gguf | Q5_K_M | 444 MB |
| Cerium-Qwen3-R1-Dev.Q5_K_S.gguf | Q5_K_S | 437 MB |
| Cerium-Qwen3-R1-Dev.Q6_K.gguf | Q6_K | 495 MB |
| Cerium-Qwen3-R1-Dev.Q8_0.gguf | Q8_0 | 639 MB |
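When choosing a quant, the main constraint is usually available memory. A minimal sketch of picking the largest quant that fits a given budget, using the file sizes hard-coded from the table above (the helper name is illustrative, and actual runtime memory use will be somewhat higher than file size due to KV cache and overhead):

```python
# File sizes (MB) taken from the Model Files table above.
QUANT_SIZES_MB = {
    "Q2_K": 296, "Q3_K_S": 323, "Q3_K_M": 347, "Q3_K_L": 368,
    "Q4_K_S": 383, "Q4_K_M": 397, "Q5_K_S": 437, "Q5_K_M": 444,
    "Q6_K": 495, "Q8_0": 639, "BF16": 1200, "F16": 1200, "F32": 2390,
}

def largest_fitting_quant(budget_mb: float):
    """Return the largest quant type whose file fits within budget_mb,
    or None if even the smallest quant is too large."""
    fitting = [(size, name) for name, size in QUANT_SIZES_MB.items()
               if size <= budget_mb]
    return max(fitting)[1] if fitting else None

print(largest_fitting_quant(500))  # picks Q6_K (495 MB)
```

Since larger quants generally preserve more quality, "largest that fits" is a reasonable first heuristic; the graph linked below refines this for the lower-quality quant types.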
## Quants Usage
(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):