---
license: apache-2.0
base_model:
- prithivMLmods/Draco-CoderMini-3B
pipeline_tag: text-generation
tags:
- text-generation-inference
- math
- problem-solve
- code
language:
- en
library_name: transformers
---
# Draco-CoderMini-3B-GGUF
Draco-CoderMini-3B is a compact, coding-optimized language model built on the Qwen2 architecture, tailored for high-accuracy code generation, debugging, and technical reasoning. With 3 billion parameters, it strikes a balance between power and deployability, making it an ideal assistant for developers, educators, and engineers working in constrained environments or requiring fast inference.
## Model Files
| File Name | Size | Format |
|---|---|---|
| Draco-CoderMini-3B.BF16.gguf | 6.18 GB | BF16 |
| Draco-CoderMini-3B.F16.gguf | 6.18 GB | F16 |
| Draco-CoderMini-3B.F32.gguf | 12.3 GB | F32 |
| .gitattributes | 1.75 kB | - |
| README.md | 210 B | - |
| config.json | 31 B | JSON |
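
A minimal sketch of running one of the GGUF files above locally with llama-cpp-python (assuming the package is installed and `Draco-CoderMini-3B.BF16.gguf` has already been downloaded to the working directory; the prompt and sampling settings are only illustrative):

```python
from llama_cpp import Llama

# Load the BF16 GGUF file (path is an assumption; point it at whichever quant you downloaded).
llm = Llama(
    model_path="Draco-CoderMini-3B.BF16.gguf",
    n_ctx=4096,       # context window for the session
    n_gpu_layers=-1,  # offload all layers to GPU if available; set to 0 for CPU-only
)

# Simple completion-style call with a coding prompt.
out = llm(
    "Write a Python function that checks whether a string is a palindrome.",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```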
## Quants Usage
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
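
To pull a single quant file rather than cloning the whole repository, `huggingface_hub` can be used; the repo id below is an assumption based on this card's title:

```python
from huggingface_hub import hf_hub_download

# Repo id is assumed from the card title; adjust if the repository is named differently.
path = hf_hub_download(
    repo_id="prithivMLmods/Draco-CoderMini-3B-GGUF",
    filename="Draco-CoderMini-3B.F16.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```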