Llama.cpp hybrid layer quantization of Mistral-Small-3.2-24B-Instruct-2506 by mistralai
Original model: https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506
The hybrid quant employs different quantization levels on a per-layer basis to increase flexibility in trading off performance against file size. Fewer parameter bits are used at deep layers and more bits at cortex layers to simultaneously optimize quantized size and model performance. This quant was optimized for similar size and performance to an IQ4_XS quant while using K quants throughout to improve processing efficiency on older GPUs and CPUs.
The layer quant is as follows:
Q4_K_H:
LAYER_TYPES='[
[0 ,"Q4_K_M"],[1 ,"Q4_K_S"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_M"],[37,"Q5_K_S"],[38,"Q5_K_M"],[39,"Q6_K"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
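For reference, the sketch below shows how these settings could drive a quantization run. It assumes a llama-quantize build patched to honor the LAYER_TYPES list and the --layer-types-high flag (neither is part of mainline llama.cpp); --token-embedding-type and --output-tensor-type are standard llama-quantize options. File names and the trailing base quant type are illustrative.

```
# Sketch only: assumes a patched llama-quantize that reads LAYER_TYPES and
# accepts --layer-types-high; file names and the trailing base type are illustrative.
export LAYER_TYPES
./llama-quantize $FLAGS \
    Mistral-Small-3.2-24B-Instruct-2506.BF16.gguf \
    Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
    Q4_K_M
```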
This quant was optimized for good reasoning performance on a select set of test prompts.
Comparison:
Quant | Size (bytes) | PPL | Comment |
---|---|---|---|
Q4_K_H | 12.7e9 | 5.45 | slightly smaller than IQ4_XS, similar performance |
IQ4_XS | 12.9e9 | 5.36 | not tested, should work well |
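The PPL numbers above can be checked with the stock llama-perplexity tool; a minimal sketch follows, with the evaluation corpus as an assumption since the table does not state which text was used.

```
# Sketch: measure perplexity of the hybrid quant with llama-perplexity.
# wiki.test.raw is only an example corpus; the evaluation text behind the
# table above is not stated here.
./llama-perplexity -m Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
    -f wiki.test.raw --n-gpu-layers 32
```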
Usage:
This is a vision capable model. It can be used together with its multimedia projector layers to process image and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md . To run the model on a 12 GB VRAM GPU, use approximately --ngl 32. Generation speed is still quite good with partial offload.
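As a concrete starting point, here is a hedged example of a single-image query with the llama-mtmd-cli tool from that directory (image path, prompt, and offload depth are illustrative):

```
# Example vision query with llama-mtmd-cli (tools/mtmd in the llama.cpp tree).
# Image path, prompt, and --ngl value are illustrative; adjust --ngl to fit your VRAM.
./llama-mtmd-cli -m Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
    --mmproj Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf \
    --image photo.jpg \
    -p "Describe this image." \
    --ngl 32
```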
Benchmarks:
A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm
Download the files from below:
Link | Type | Size (bytes) | Notes |
---|---|---|---|
Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf | Q4_K_H | 12.7e9 | ~IQ4_XS quality/size |
Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf | mmproj | 0.88e9 | multimedia projector |
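The files can also be pulled from the command line; a sketch using huggingface-cli, with the repository id as shown on this model card:

```
# Sketch: fetch the quantized model and the multimedia projector with huggingface-cli.
huggingface-cli download steampunque/Mistral-Small-3.2-24B-Instruct-2506-Hybrid-GGUF \
    Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
    Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf \
    --local-dir .
```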
A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository: