Llama.cpp hybrid layer quantization of Qwen3-Coder-30B-A3B-Instruct by Qwen
Original model: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct
The hybrid quant employs different quantization levels on a per-layer basis to increase the flexibility of trading off performance vs. file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. For this file the layer quants are as follows:
LAYER_TYPES='[
[0 ,"Q4_K_M"],[1 ,"Q4_K_M"],[2 ,"Q3_K_L"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_L"],[9 ,"Q3_K_M"],[10,"Q3_K_L"],[11,"Q3_K_M"],[12,"Q3_K_L"],[13,"Q3_K_M"],[14,"Q3_K_L"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_L"],[22,"Q3_K_L"],[23,"Q3_K_L"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q3_K_L"],[34,"Q4_K_S"],[35,"Q3_K_L"],[36,"Q4_K_S"],[37,"Q4_K_S"],[38,"Q4_K_S"],[39,"Q4_K_S"],
[40,"Q4_K_S"],[41,"Q4_K_S"],[42,"Q4_K_S"],[43,"Q4_K_S"],[44,"Q4_K_M"],[45,"Q5_K_S"],[46,"Q5_K_M"],[47,"Q6_K" ]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
These layer quants were optimized for good performance on both code and reasoning problems across a small set of curated test/eval prompts.
Comparison:
Quant | Size | PPL | Comment |
---|---|---|---|
IQ4_XS | 16.6e9 B | 9.3 | default embed and output |
Q4_K_H | 16.7e9 B | 9.4 | Q4_K embed, Q6_K output |
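For reference, perplexity figures like those above are typically measured with llama.cpp's llama-perplexity tool. A minimal sketch is shown below; the evaluation text behind the table's PPL numbers is not stated here, so the file name is only a placeholder:

```
# Minimal sketch: measure perplexity of the quantized model with llama.cpp.
# eval.txt is a placeholder; the corpus used for the PPL numbers in the
# comparison table above is not specified in this card.
./llama-perplexity \
    -m Qwen3-30B-Coder-A3B-Instruct.Q4_K_H.gguf \
    -f eval.txt \
    -ngl 99
```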
Usage:
This MoE model can be run efficiently by offloading the expert tensors to CPU via -ot exps=CPU, which opens up very large context space on the GPU. The smaller size of the optimally quantized parameters gives an effective boost in CPU processing speed because less memory bandwidth is needed to repeatedly copy them from main memory to SIMD registers. The model can also run fully offloaded to GPU via RPC or on a high-VRAM GPU.
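As a concrete example, a minimal llama-server invocation along these lines should work on a recent llama.cpp build; the model path, context size, and port are placeholders:

```
# Minimal sketch: keep the MoE expert tensors in system RAM (-ot exps=CPU)
# while offloading the remaining layers to the GPU (-ngl 99), freeing VRAM
# for a large KV cache. Paths, context size, and port are placeholders.
./llama-server \
    -m Qwen3-30B-Coder-A3B-Instruct.Q4_K_H.gguf \
    -ngl 99 \
    -ot exps=CPU \
    -c 100000 \
    --host 127.0.0.1 --port 8080
```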
The model can be speculated using Qwen2.5-Coder-0.5B-Instruct if the inference platform supports vocabulary translation between the draft and target models. Approximate performance with a downstream speculator in llama.cpp, with the model offloaded to two 4070s, is 95 t/s generation speed at a spec block length of 12 for code problems and 45 t/s at a spec block length of 4 for non-code problems. When offloading expert tensors to the CPU (a 9900K), generation speed drops to about 45 t/s while available context grows from about 50k tokens (fully offloaded to two 4070s) to about 100k tokens. Rough performance speculating with Qwen2.5-Coder-0.5B-Instruct (an illustrative command sketch follows the table):
Config | Speculated code gen speed | F16 KV context size | Q8 KV context size |
---|---|---|---|
2x 4070, RPC, fully offloaded to GPU | 95 t/s | ~50k tokens | ~88k tokens |
1x 4070, -ot exps=CPU, CPU = 9900K | 45 t/s | ~100k tokens | ~160k tokens |
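For illustration only, a draft-model setup in llama.cpp is configured roughly as follows. Note that stock llama.cpp expects the draft and target vocabularies to be compatible, so this sketch assumes an inference stack with the vocabulary translation mentioned above; the draft model path, context size, and draft lengths are placeholders:

```
# Illustrative sketch only: stock llama.cpp requires draft/target vocabulary
# compatibility, so speculating Qwen3-Coder with Qwen2.5-Coder-0.5B assumes
# the vocabulary translation support described above.
# --cache-type-k/--cache-type-v q8_0 corresponds to the "Q8 KV context size"
# column (quantized KV cache; the V cache additionally needs flash attention).
./llama-server \
    -m  Qwen3-30B-Coder-A3B-Instruct.Q4_K_H.gguf \
    -md Qwen2.5-Coder-0.5B-Instruct.Q8_0.gguf \
    -ngl 99 -ngld 99 \
    --draft-max 12 --draft-min 1 \
    -c 88000 \
    --cache-type-k q8_0 --cache-type-v q8_0
```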
High-context performance appears to work correctly, verified with a needle-in-a-haystack test at 75k tokens.
Benchmarks:
Code evals for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm.
Download the file from the link below:
Link | Type | Size/e9 B | Notes |
---|---|---|---|
Qwen3-30B-Coder-A3B-Instruct.Q4_K_H.gguf | Q4_K_H | 16.7e9 B | ~IQ4_XS size |
A discussion thread about the hybrid layer quant approach can be found on the llama.cpp git repository.