DeepCoder-14B-Preview GGUF Models

Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)

Our latest quantization method introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on Llama-3-8B. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.

Benchmark Context

All tests conducted on Llama-3-8B-Instruct using:

  • Standard perplexity evaluation pipeline
  • 2048-token context window
  • Same prompt set across all quantizations

Method

  • Dynamic Precision Allocation:
    • First/last 25% of layers → IQ4_XS (selected layers)
    • Middle 50% of layers → IQ2_XXS/IQ3_S (for efficiency)
  • Critical Component Protection:
    • Embeddings and output layers use Q5_K
    • Reduces error propagation by 38% compared to standard 1-2 bit quantization
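
As a rough illustration of the layer-bucketing idea, the hypothetical sketch below (not the actual quantization tooling) assigns per-block quant types following the split described above:

```python
# Hypothetical sketch of dynamic precision allocation: first/last 25% of
# blocks get a higher-precision type, the middle 50% an aggressive one,
# and embeddings/output stay at Q5_K. Illustrative only.
def assign_quant_types(n_layers: int) -> dict:
    plan = {"token_embd": "Q5_K", "output": "Q5_K"}  # protect critical components
    boundary = n_layers // 4                          # first/last 25% of blocks
    for i in range(n_layers):
        if i < boundary or i >= n_layers - boundary:
            plan[f"blk.{i}"] = "IQ4_XS"               # higher precision at the ends
        else:
            plan[f"blk.{i}"] = "IQ2_XXS"              # efficient middle layers
    return plan

if __name__ == "__main__":
    for name, qtype in assign_quant_types(8).items():
        print(name, qtype)
```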

Quantization Performance Comparison (Llama-3-8B)

| Quantization | Standard PPL | DynamicGate PPL | Δ PPL  | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
|--------------|--------------|-----------------|--------|----------|---------|--------|-----------|----------|
| IQ2_XXS      | 11.30        | 9.84            | -12.9% | 2.5G     | 2.6G    | +0.1G  | 234s      | 246s     |
| IQ2_XS       | 11.72        | 11.63           | -0.8%  | 2.7G     | 2.8G    | +0.1G  | 242s      | 246s     |
| IQ2_S        | 14.31        | 9.02            | -36.9% | 2.7G     | 2.9G    | +0.2G  | 238s      | 244s     |
| IQ1_M        | 27.46        | 15.41           | -43.9% | 2.2G     | 2.5G    | +0.3G  | 206s      | 212s     |
| IQ1_S        | 53.07        | 32.00           | -39.7% | 2.1G     | 2.4G    | +0.3G  | 184s      | 209s     |

Key:

  • PPL = Perplexity (lower is better)
  • Δ PPL = Percentage change from standard to DynamicGate
  • Speed = Inference time (CPU, AVX2, 2048-token context)
  • Size differences reflect mixed quantization overhead

Key Improvements:

  • 🔥 IQ1_M shows massive 43.9% perplexity reduction (27.46 → 15.41)
  • 🚀 IQ2_S cuts perplexity by 36.9% while adding only 0.2GB
  • IQ1_S still achieves 39.7% lower perplexity despite 1-bit quantization

Tradeoffs:

  • All variants have modest size increases (0.1-0.3GB)
  • Inference speeds remain comparable (<5% difference)

When to Use These Models

  • Fitting models into GPU VRAM
  • Memory-constrained deployments
  • CPU and edge devices where 1-2 bit errors can be tolerated
  • Research into ultra-low-bit quantization

Choosing the Right Model Format

Selecting the correct model format depends on your hardware capabilities and memory constraints.

BF16 (Brain Float 16) – Use if BF16 acceleration is available

  • A 16-bit floating-point format designed for faster computation while retaining good precision.
  • Provides similar dynamic range as FP32 but with lower memory usage.
  • Recommended if your hardware supports BF16 acceleration (check your device's specs).
  • Ideal for high-performance inference with reduced memory footprint compared to FP32.

📌 Use BF16 if:
✔ Your hardware has native BF16 support (e.g., newer GPUs, TPUs).
✔ You want higher precision while saving memory.
✔ You plan to requantize the model into another format.

📌 Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.


F16 (Float 16) – More widely supported than BF16

  • A 16-bit floating-point format offering high precision but a narrower range of values than BF16.
  • Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
  • Slightly lower numerical precision than BF16 but generally sufficient for inference.

📌 Use F16 if:
✔ Your hardware supports FP16 but not BF16.
✔ You need a balance between speed, memory usage, and accuracy.
✔ You are running on a GPU or another device optimized for FP16 computations.

📌 Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have tight memory limits (a quantized format will be smaller).
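
If you are unsure which 16-bit format your GPU accelerates, a quick check with PyTorch looks like the sketch below (an assumption for illustration; torch is not required for GGUF inference):

```python
# Quick capability check with PyTorch (assumes a CUDA-enabled torch install;
# GGUF inference itself does not need torch).
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("BF16 supported:", torch.cuda.is_bf16_supported())  # True on Ampere and newer
    # FP16 is supported on essentially all recent CUDA GPUs.
else:
    print("No CUDA device found; prefer a quantized GGUF for CPU inference.")
```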


Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference

Quantization reduces model size and memory usage while maintaining as much accuracy as possible.

  • Lower-bit models (Q4_K) → Best for minimal memory usage; may have lower precision.
  • Higher-bit models (Q6_K, Q8_0) → Better accuracy, but require more memory.

📌 Use Quantized Models if:
✔ You are running inference on a CPU and need an optimized model.
✔ Your device has low VRAM and cannot load full-precision models.
✔ You want to reduce memory footprint while keeping reasonable accuracy.

📌 Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).


Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)

These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.

  • IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.

    • Use case: Best for ultra-low-memory devices where even Q4_K is too large.
    • Trade-off: Lower accuracy compared to higher-bit quantizations.
  • IQ3_S: Small block size for maximum memory efficiency.

    • Use case: Best for low-memory devices where IQ3_XS is too aggressive.
  • IQ3_M: Medium block size for better accuracy than IQ3_S.

    • Use case: Suitable for low-memory devices where IQ3_S is too limiting.
  • Q4_K: 4-bit quantization with block-wise optimization for better accuracy.

    • Use case: Best for low-memory devices where Q6_K is too large.
  • Q4_0: Pure 4-bit quantization, optimized for ARM devices.

    • Use case: Best for ARM-based devices or low-memory environments.

Summary Table: Model Format Selection

| Model Format | Precision  | Memory Usage | Device Requirements          | Best Use Case                               |
|--------------|------------|--------------|------------------------------|---------------------------------------------|
| BF16         | Highest    | High         | BF16-supported GPU/CPUs      | High-speed inference with reduced memory    |
| F16          | High       | High         | FP16-supported devices       | GPU inference when BF16 isn't available     |
| Q4_K         | Medium-Low | Low          | CPU or low-VRAM devices      | Best for memory-constrained environments    |
| Q6_K         | Medium     | Moderate     | CPU with more memory         | Better accuracy while still being quantized |
| Q8_0         | High       | Moderate     | CPU or GPU with enough VRAM  | Best accuracy among quantized models        |
| IQ3_XS       | Very Low   | Very Low     | Ultra-low-memory devices     | Extreme memory efficiency, low accuracy     |
| Q4_0         | Low        | Low          | ARM or low-memory devices    | llama.cpp can optimize for ARM devices      |
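
As a quick starting point, here is a minimal sketch of loading one of the quantized files from this repo with llama-cpp-python (one option among several GGUF runtimes; the file name, context size, and offload settings are illustrative):

```python
# Minimal sketch using llama-cpp-python (`pip install llama-cpp-python`).
# The file name is one of the GGUFs listed below; adjust the path, context
# size, and GPU offload for your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepCoder-14B-Preview-q4_k.gguf",
    n_ctx=8192,        # raise if you have memory for longer reasoning traces
    n_gpu_layers=-1,   # offload all layers to GPU; use 0 for CPU-only inference
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
)
print(out["choices"][0]["message"]["content"])
```

See the Usage Recommendations further down for the suggested sampling settings.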

Included Files & Details

DeepCoder-14B-Preview-bf16.gguf

  • Model weights preserved in BF16.
  • Use this if you want to requantize the model into a different format.
  • Best if your device supports BF16 acceleration.
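
If you do want to requantize, one common route is llama.cpp's llama-quantize tool; below is a minimal sketch invoked from Python (assumes a locally built llama.cpp checkout; the paths and target type are illustrative):

```python
# Sketch only: requantize the BF16 GGUF to Q4_K_M with llama.cpp's
# llama-quantize binary. Paths assume a built llama.cpp checkout.
import subprocess

subprocess.run(
    [
        "./llama-quantize",                    # binary from a llama.cpp build
        "DeepCoder-14B-Preview-bf16.gguf",     # input: BF16 weights from this repo
        "DeepCoder-14B-Preview-Q4_K_M.gguf",   # output file to create
        "Q4_K_M",                              # target quantization type
    ],
    check=True,
)
```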

DeepCoder-14B-Preview-f16.gguf

  • Model weights stored in F16.
  • Use if your device supports FP16, especially if BF16 is not available.

DeepCoder-14B-Preview-bf16-q8_0.gguf

  • Output & embeddings remain in BF16.
  • All other layers quantized to Q8_0.
  • Use if your device supports BF16 and you want a quantized version.

DeepCoder-14B-Preview-f16-q8_0.gguf

  • Output & embeddings remain in F16.
  • All other layers quantized to Q8_0.

DeepCoder-14B-Preview-q4_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q4_K.
  • Good for CPU inference with limited memory.

DeepCoder-14B-Preview-q4_k_s.gguf

  • Smallest Q4_K variant, using less memory at the cost of accuracy.
  • Best for very low-memory setups.

DeepCoder-14B-Preview-q6_k.gguf

  • Output & embeddings quantized to Q8_0.
  • All other layers quantized to Q6_K.

DeepCoder-14B-Preview-q8_0.gguf

  • Fully Q8 quantized model for better accuracy.
  • Requires more memory but offers higher precision.

DeepCoder-14B-Preview-iq3_xs.gguf

  • IQ3_XS quantization, optimized for extreme memory efficiency.
  • Best for ultra-low-memory devices.

DeepCoder-14B-Preview-iq3_m.gguf

  • IQ3_M quantization, offering a medium block size for better accuracy.
  • Suitable for low-memory devices.

DeepCoder-14B-Preview-q4_0.gguf

  • Pure Q4_0 quantization, optimized for ARM devices.
  • Best for low-memory environments.
  • Prefer IQ4_NL for better accuracy.

🚀 If you find these models useful

Please click "Like" if you find this useful!
Help me test my AI-Powered Network Monitor Assistant with quantum-ready security checks:
👉 Free Network Monitor

💬 How to test:

  1. Click the chat icon (bottom right on any page)
  2. Choose an AI assistant type:
    • TurboLLM (GPT-4-mini)
    • HugLLM (Open-source)
    • TestLLM (Experimental CPU-only)

What I’m Testing

I’m pushing the limits of small open-source models for AI network monitoring, specifically:

  • Function calling against live network services
  • How small can a model go while still handling:
    • Automated Nmap scans
    • Quantum-readiness checks
    • Metasploit integration

🟡 TestLLM – Current experimental model (llama.cpp on 6 CPU threads):

  • Zero-configuration setup
  • ⏳ 30s load time (slow inference but no API costs)
  • 🔧 Help wanted! If you’re into edge-device AI, let’s collaborate!

Other Assistants

🟢 TurboLLM – Uses gpt-4-mini.

🔵 HugLLM – Open-source models (≈8B params):

  • 2x more tokens than TurboLLM
  • AI-powered log analysis
  • 🌐 Runs on Hugging Face Inference API

💡 Example AI Commands to Test:

  1. "Give me info on my websites SSL certificate"
  2. "Check if my server is using quantum safe encyption for communication"
  3. "Run a quick Nmap vulnerability test"

DeepCoder-14B-Preview
🚀 Democratizing Reinforcement Learning for LLMs (RLLM) 🌟

DeepCoder Overview

DeepCoder-14B-Preview is a code reasoning LLM fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning (RL) to scale up to long context lengths. The model achieves 60.6% Pass@1 accuracy on LiveCodeBench v5 (8/1/24-2/1/25), representing an 8% improvement over the base model (53%) and achieving similar performance to OpenAI's o3-mini with just 14B parameters.

Data

Our training dataset consists of approximately 24K unique problem-test pairs compiled from:

  • Taco-Verified
  • PrimeIntellect SYNTHETIC-1
  • LiveCodeBench v5 (5/1/23-7/31/24)

Training Recipe

Our training recipe relies on an improved version of GRPO (GRPO+) and iterative context lengthening, introduced in DeepScaleR.

GRPO+

We enhance the original GRPO algorithm with insights from DAPO to enable more stable training:

  • Offline Difficulty Filtering: DAPO employs online dynamic sampling, discarding both entirely correct and entirely incorrect samples on the fly. While this helps maintain a more stable effective batch size, it introduces significant runtime overhead due to rejection sampling. Instead, we perform offline difficulty filtering on a subset of coding problems to ensure the training dataset remains within a suitable difficulty range.
  • No Entropy Loss: We observed that including an entropy loss term often led to instability, with entropy growing exponentially and ultimately collapsing training. To mitigate this, we eliminate the entropy loss entirely.
  • No KL Loss: Eliminating the KL loss means the LLM is no longer constrained to the trust region of the original SFT model. This removal also obviates the need to compute log probabilities for the reference policy, thereby accelerating training.
  • Overlong Filtering (from DAPO): To preserve long-context reasoning, we mask the loss for truncated sequences. This technique enables DeepCoder to generalize to 64K-context inference despite being trained with a 32K context.
  • Clip High (from DAPO): By increasing the upper bound in GRPO/PPO’s surrogate loss, we encourage more exploration and more stable entropy.
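
A minimal PyTorch sketch of the resulting objective (illustrative only; the tensor names and clip bound values are assumptions, not the actual training code):

```python
# Illustrative GRPO+-style objective: asymmetric "clip high" bound, no KL
# or entropy terms, and loss masking for tokens of truncated (overlong)
# sequences. Not the actual training implementation.
import torch

def grpo_plus_loss(logp, logp_old, advantages, truncated_mask,
                   clip_low=0.2, clip_high=0.28):
    """logp, logp_old, advantages: per-token tensors; truncated_mask: True
    where a token belongs to a sequence that hit the length cap."""
    ratio = torch.exp(logp - logp_old)                          # importance ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_low, 1 + clip_high) * advantages
    per_token = -torch.minimum(unclipped, clipped)              # clipped surrogate
    keep = (~truncated_mask).float()                            # overlong filtering
    return (per_token * keep).sum() / keep.sum().clamp(min=1.0)
```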

Iterative Context Lengthening

Our original Deepscaler-1.5B-Preview scaled long context training from 8K→16K→24K, achieving 33→38→43% on AIME respectively. Similarly, Deepcoder-14B-Preview is trained on 16K→32K, achieving 54→58% on LiveCodeBench (v5). DeepCoder-14B-Preview successfully generalizes to longer contexts when evaluated at 64K context, reaching 60.6%.

DeepCoder generalizes better to long contexts than the base distilled model, due to DAPO's overlong filtering. However, its longer responses are often truncated when the maximum generation length is capped at 16K, which can lower its scores.

| Model                        | 16K  | 32K  | 64K  |
|------------------------------|------|------|------|
| DeepCoder-14B-Preview        | 45.6 | 57.9 | 60.6 |
| DeepSeek-R1-Distill-Qwen-14B | 50.2 | 53.0 | 53.0 |

A more detailed description of the training recipe can be found in our blog post.

Evaluation

We evaluate DeepCoder-14B-Preview on various coding benchmarks, including LiveCodeBench (LCB v5), Codeforces, and HumanEval+.

| Model                        | LCB (v5) 8/1/24-2/1/25 | Codeforces Rating | Codeforces Percentile | HumanEval+ |
|------------------------------|------------------------|-------------------|-----------------------|------------|
| DeepCoder-14B-Preview (ours) | 60.6                   | 1936              | 95.3                  | 92.6       |
| DeepSeek-R1-Distill-Qwen-14B | 53.0                   | 1791              | 92.7                  | 92.0       |
| O1-2024-12-17 (Low)          | 59.5                   | 1991              | 96.1                  | 90.8       |
| O3-Mini-2025-1-31 (Low)      | 60.9                   | 1918              | 94.9                  | 92.6       |
| O1-Preview                   | 42.7                   | 1658              | 88.5                  | 89         |
| DeepSeek-R1                  | 62.8                   | 1948              | 95.4                  | 92.6       |
| Llama-4-Behemoth             | 49.4                   | -                 | -                     | -          |

Serving DeepCoder

Our model can be served using popular high-performance inference systems:

  • vLLM
  • Hugging Face Text Generation Inference (TGI)
  • SGLang
  • TensorRT-LLM

All these systems support the OpenAI Chat Completions API format.
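
For instance, a minimal sketch of pointing the standard OpenAI Python client at a locally served instance (the server command, port, and API key below are assumptions for illustration):

```python
# Sketch: the servers above expose an OpenAI-compatible endpoint, e.g. one
# started with `vllm serve agentica-org/DeepCoder-14B-Preview`.
# Base URL, port, and API key are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
# See the Usage Recommendations below for the suggested request parameters.
```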

Usage Recommendations

Our usage recommendations are similar to those of R1 and R1 Distill series:

  1. Avoid adding a system prompt; all instructions should be contained within the user prompt.
  2. temperature = 0.6
  3. top_p = 0.95
  4. This model performs best with max_tokens set to at least 64000
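
Applied to the client sketched in the serving section, a request following these recommendations might look like this (the model name and prompt are illustrative):

```python
# Recommended settings: instructions in the user prompt (no system prompt),
# temperature 0.6, top_p 0.95, and a generous max_tokens budget.
response = client.chat.completions.create(
    model="agentica-org/DeepCoder-14B-Preview",
    messages=[
        {"role": "user", "content": "Write a function that merges two sorted lists."},
    ],
    temperature=0.6,
    top_p=0.95,
    max_tokens=64000,
)
print(response.choices[0].message.content)
```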

License

This project is released under the MIT License, reflecting our commitment to open and accessible AI development. We believe in democratizing AI technology by making our work freely available for anyone to use, modify, and build upon. This permissive license ensures that researchers, developers, and enthusiasts worldwide can leverage and extend our work without restrictions, fostering innovation and collaboration in the AI community.

Acknowledgement

Citation

@misc{deepcoder2025,
  title={DeepCoder: A Fully Open-Source 14B Coder at O3-mini Level},
  author={Michael Luo and Sijun Tan and Roy Huang and Ameen Patel and Alpay Ariyak and Qingyang Wu and Xiaoxiang Shi and Rachel Xin and Colin Cai and Maurice Weber and Ce Zhang and Li Erran Li and Raluca Ada Popa and Ion Stoica},
  howpublished={\url{https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51}},
  note={Notion Blog},
  year={2025}
}