ubergarm committed
Commit a66cdac · 1 Parent(s): 4ec6856

initial commit

Files changed (2)
  1. .gitattributes +3 -0
  2. README.md +118 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ imatrix-*.dat filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,121 @@
  ---
+ quantized_by: ubergarm
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen3-30B-A3B-Instruct-2507
  license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Instruct-2507/blob/main/LICENSE
+ base_model_relation: quantized
+ tags:
+ - imatrix
+ - conversational
+ - qwen3_moe
+ - ik_llama.cpp
  ---
14
+
+ ## `ik_llama.cpp` imatrix Quantizations of Qwen/Qwen3-30B-A3B-Instruct-2507
+ This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc.!
+
+ *NOTE* `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
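+
+ Since the fork is required, a minimal build sketch may help (this assumes the standard CMake flow that `ik_llama.cpp` shares with mainline llama.cpp; drop `-DGGML_CUDA=ON` for a CPU-only build):
+
+ ```bash
+ # clone the fork itself, not mainline llama.cpp
+ git clone https://github.com/ikawrakow/ik_llama.cpp
+ cd ik_llama.cpp
+
+ # configure with CUDA support and build the release binaries
+ cmake -B build -DGGML_CUDA=ON
+ cmake --build build --config Release -j $(nproc)
+ ```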
19
+
+ Some of ik's new quants are also supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCpp.
+
+ These quants provide best-in-class perplexity for the given memory footprint.
+
+ ## Big Thanks
+ Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and the [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!
+
+ Also thanks to all the folks in the quanting and inferencing community on [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models!
+
+ ## Quant Collection
+ Perplexity computed against *wiki.test.raw* (see the command sketch after the list below). These first two are just test quants for baseline perplexity comparison:
31
+
+ ![Perplexity Chart](images/perplexity.png "Chart showing perplexity improving as BPW increases.")
+
+ * `bf16` 56.894 GiB (16.007 BPW)
+   - Final estimate: PPL = TODO
+ * `Q8_0` TODO GiB (TODO BPW)
+   - Final estimate: PPL = TODO
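+
+ As a hedged sketch (not the exact invocation used for this card), numbers like these are typically produced with the fork's perplexity tool; the model filename and thread count below are assumptions:
+
+ ```bash
+ # assumes wiki.test.raw (the wikitext-2 test split) is already downloaded
+ ./build/bin/llama-perplexity \
+     -m Qwen3-30B-A3B-Instruct-2507-IQ5_K.gguf \
+     -f wiki.test.raw \
+     --threads 16
+ ```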
38
+
+ ## `IQ5_K` TODO GiB (TODO BPW)
+ Final estimate: PPL = TODO
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
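+
+ While the recipes above are still TODO, the general `ik_llama.cpp` imatrix flow looks roughly like the sketch below; the filenames, calibration corpus, and the single `--custom-q` override are illustrative assumptions, not the final recipe:
+
+ ```bash
+ # 1. compute an importance matrix over a calibration corpus
+ ./build/bin/llama-imatrix \
+     -m Qwen3-30B-A3B-Instruct-2507-BF16.gguf \
+     -f ubergarm-imatrix-calibration-corpus-v02.txt \
+     -o imatrix-Qwen3-30B-A3B-Instruct-2507.dat
+
+ # 2. quantize guided by that imatrix; --custom-q (an ik_llama.cpp
+ #    extension) can pin selected tensors to higher precision
+ ./build/bin/llama-quantize \
+     --imatrix imatrix-Qwen3-30B-A3B-Instruct-2507.dat \
+     --custom-q "blk\..*\.attn_.*=q8_0" \
+     Qwen3-30B-A3B-Instruct-2507-BF16.gguf \
+     Qwen3-30B-A3B-Instruct-2507-IQ5_K.gguf \
+     IQ5_K \
+     16
+ ```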
51
+
+ ## `IQ4_K`
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ## `IQ4_KSS`
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ## `IQ3_K`
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ## `IQ3_KS`
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ## `IQ2_KL`
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
109
+
+ ## Quick Start
+ This example is for single-CUDA-GPU hybrid inferencing with CPU/RAM. Check the ik_llama.cpp discussions or my other quant repos for more examples, including multi-GPU setups.
+
+ ```bash
+ echo TODO
+ ```
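+
+ Until that command is filled in, here is a hedged sketch of a typical single-GPU hybrid launch with `ik_llama.cpp`'s server; the model filename, context size, and thread count are assumptions to adapt to your hardware:
+
+ ```bash
+ # -ngl 99 offloads all layers to the GPU by default, then
+ # -ot exps=CPU overrides the routed-expert tensors back onto CPU/RAM
+ # so the model fits in limited VRAM; -fa enables flash attention and
+ # -fmoe enables fused MoE kernels
+ ./build/bin/llama-server \
+     --model Qwen3-30B-A3B-Instruct-2507-IQ5_K.gguf \
+     -fa -fmoe \
+     -c 32768 \
+     -ngl 99 \
+     -ot exps=CPU \
+     --threads 16 \
+     --host 127.0.0.1 \
+     --port 8080
+ ```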
116
+
+ ## References
+ * [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
+ * [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
+ * [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
+ * [eaddario/imatrix-calibration](https://huggingface.co/datasets/eaddario/imatrix-calibration)