steampunque committed on
Commit 33d2e39 · verified · 1 Parent(s): 42e74fd

Create README.md
Files changed (1): README.md added (+63, -0)

---
license: apache-2.0
base_model: mistralai/Mistral-Small-3.2-24B-Instruct-2506
base_model_relation: quantized
tags:
- Mistral
- Mistral-Small
- GGUF
- quantized
- 4-bit
---

## Llama.cpp hybrid layer quantization of Mistral-Small-3.2-24B-Instruct-2506 by mistralai

Original model: https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506

The hybrid quant employs different quantization levels on a per-layer basis to increase the
flexibility of trading off performance against file size. Fewer parameter bits are used at deep layers
and more bits at cortex layers to simultaneously optimize quantized size and model performance.
This quant was optimized for similar size and performance to an IQ4_XS quant while using only K quants
throughout to increase processing efficiency on older GPUs and CPUs.

The layer quant is as follows:
```
Q4_K_H:
LAYER_TYPES='[
[0 ,"Q4_K_M"],[1 ,"Q4_K_S"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_M"],[37,"Q5_K_S"],[38,"Q5_K_M"],[39,"Q6_K"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
```
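
A recipe like the above is applied at quantization time. The sketch below is only illustrative and is not the exact script used for this repo: per-layer type overrides (the LAYER_TYPES list and the --layer-types-high flag) are not part of stock llama.cpp's llama-quantize and come from the modified quantization flow described in the discussion linked at the end of this README. The input file name, output file name, and fallback base type are placeholders.

```
# Illustrative sketch only: assumes a llama-quantize build patched to honor the
# per-layer LAYER_TYPES list. Stock llama-quantize only understands the
# embedding/output overrides contained in FLAGS.
LAYER_TYPES='[ ... per-layer list from above ... ]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"

# Placeholder file names and fallback base type.
./llama-quantize $FLAGS \
    Mistral-Small-3.2-24B-Instruct-2506.BF16.gguf \
    Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
    Q4_K_M
```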
This quant was optimized for good reasoning performance on a select set of test prompts.

Comparison:

Quant  | Size     | PPL  | Comment
-------|----------|------|-----------
Q4_K_H | 12.7e9 B | 5.45 | slightly smaller than IQ4_XS, similar performance
IQ4_XS | 12.9e9 B | 5.36 | not tested, should work well

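The PPL numbers above can in principle be checked with llama.cpp's perplexity tool. The evaluation text used for this table is not stated here, so the wikitext-2 test file in the sketch below is only an assumed example.

```
# Assumed example: the actual evaluation corpus behind the table above is not
# specified in this README.
./llama-perplexity -m Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
    -f wiki.test.raw -ngl 32
```
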
Usage:

This is a vision-capable model. It can be used together with its multimedia projector layers to process image and text inputs
and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd
readme in the tools directory of the llama.cpp source tree https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md .
To run the model on a 12G VRAM GPU, use approximately --ngl 32. Generation speed is still quite good with partial offload. An example
invocation is sketched below.
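
A minimal vision-mode sketch, assuming the llama-mtmd-cli tool from a current llama.cpp build; the image path and prompt are placeholders:

```
# Minimal sketch; image path and prompt are placeholders.
./llama-mtmd-cli -m Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
    --mmproj Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf \
    --image photo.jpg -ngl 32 \
    -p "Describe this image."
```

Recent llama-server builds also accept a --mmproj argument, so the same pair of files can be served for multimodal requests; text-only use needs just the main GGUF.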

Benchmarks:

A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm

## Download the files from below:
| Link | Type | Size | Notes |
|------|------|------|-------|
| [Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf](https://huggingface.co/steampunque/Mistral-Small-3.2-24B-Instruct-2506-Hybrid-GGUF/resolve/main/Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf) | Q4_K_H | 12.7e9 B | ~IQ4_XS quality/size |
| [Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf](https://huggingface.co/steampunque/Mistral-Small-3.2-24B-Instruct-2506-Hybrid-GGUF/resolve/main/Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf) | mmproj | 0.88e9 B | multimedia projector |

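For convenience, both files can be fetched with the Hugging Face CLI (a sketch, assuming the huggingface_hub package is installed):

```
# Assumes the huggingface_hub CLI is installed (pip install -U huggingface_hub).
huggingface-cli download steampunque/Mistral-Small-3.2-24B-Instruct-2506-Hybrid-GGUF \
    Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
    Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf \
    --local-dir .
```
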
A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040