Omega-Qwen2.5-Coder-3B-GGUF
Omega-Qwen2.5-Coder-3B is a compact, high-efficiency code-focused model fine-tuned from Qwen2.5-Coder-3B-Instruct on the symbolically rich Open-Omega-Forge-1M dataset. Designed for hard-coded tasks and deterministic computation, the model runs in a "thinking-disabled" mode, delivering precise, structured outputs with minimal hallucination, which makes it well suited to rigorous coding workflows and embedded logic applications. This repository provides the model in GGUF format at the quantization levels listed below.
Model Files
| File Name | Size | Precision |
|---|---|---|
| Omega-Qwen2.5-Coder-3B.BF16.gguf | 6.18 GB | BF16 |
| Omega-Qwen2.5-Coder-3B.F16.gguf | 6.18 GB | F16 |
| Omega-Qwen2.5-Coder-3B.F32.gguf | 12.3 GB | F32 |
| Omega-Qwen2.5-Coder-3B.Q2_K.gguf | 1.27 GB | Q2_K |
| Omega-Qwen2.5-Coder-3B.Q3_K_L.gguf | 1.71 GB | Q3_K_L |
| Omega-Qwen2.5-Coder-3B.Q3_K_M.gguf | 1.59 GB | Q3_K_M |
| Omega-Qwen2.5-Coder-3B.Q3_K_S.gguf | 1.45 GB | Q3_K_S |
| Omega-Qwen2.5-Coder-3B.Q4_K_M.gguf | 1.93 GB | Q4_K_M |
| Omega-Qwen2.5-Coder-3B.Q4_K_S.gguf | 1.83 GB | Q4_K_S |
| Omega-Qwen2.5-Coder-3B.Q5_K_M.gguf | 2.22 GB | Q5_K_M |
| Omega-Qwen2.5-Coder-3B.Q5_K_S.gguf | 2.17 GB | Q5_K_S |
| Omega-Qwen2.5-Coder-3B.Q6_K.gguf | 2.54 GB | Q6_K |
| Omega-Qwen2.5-Coder-3B.Q8_0.gguf | 3.29 GB | Q8_0 |
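
Any of the files above can be run locally with llama-cpp-python. The sketch below is a minimal example, assuming the Q4_K_M file has already been downloaded to the working directory; the prompt and generation settings are placeholders.

```python
# Minimal sketch: run a quant from the table with llama-cpp-python
# (pip install llama-cpp-python). The local file path is an assumption.
from llama_cpp import Llama

llm = Llama(
    model_path="./Omega-Qwen2.5-Coder-3B.Q4_K_M.gguf",  # any quant from the table
    n_ctx=4096,  # context window; adjust to your memory budget
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
    ],
    max_tokens=512,
    temperature=0.2,  # low temperature suits deterministic coding tasks
)
print(response["choices"][0]["message"]["content"])
```

Switching quants only changes the `model_path`; the loading and chat-completion calls stay the same.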
Quants Usage
(Sorted by size, not necessarily by quality; IQ-quants are often preferable to similar-sized non-IQ quants.)
ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better), which is a useful reference when choosing a quant.
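
Once a quant has been chosen, the single file can be fetched programmatically from this repository with huggingface_hub; a minimal sketch, assuming the Q4_K_M file is the one wanted:

```python
# Minimal sketch: download one quant file from this repo
# (pip install huggingface_hub). The chosen filename is an assumption.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="prithivMLmods/Omega-Qwen2.5-Coder-3B-GGUF",
    filename="Omega-Qwen2.5-Coder-3B.Q4_K_M.gguf",  # swap for any file in the table
)
print(model_path)  # local cache path; pass this to the loader shown above
```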
Model tree for prithivMLmods/Omega-Qwen2.5-Coder-3B-GGUF
Base model: Qwen/Qwen2.5-3B