ubergarm committed · Commit 478d13d · 1 Parent(s): 3283389

initial commit

Files changed (2):
1. .gitattributes +3 -0
2. README.md +550 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ imatrix-*.dat filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text

README.md CHANGED
---
quantized_by: ubergarm
pipeline_tag: text-generation
base_model: Qwen/Qwen3-Coder-30B-A3B-Instruct
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct/blob/main/LICENSE
base_model_relation: quantized
tags:
- imatrix
- conversational
- qwen3_moe
- ik_llama.cpp
---

## `ik_llama.cpp` imatrix Quantizations of Qwen/Qwen3-Coder-30B-A3B-Instruct
This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork, which supports ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc!

*NOTE*: `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.

Some of ik's new quants are also supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCpp.

These quants provide best-in-class perplexity for the given memory footprint.

## Big Thanks
Shout out to Wendell and the **Level1Techs** crew, and the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826) and [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quanting and inferencing community on the [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models!

## Quant Collection
Perplexity computed against *wiki.test.raw*.

![Perplexity Chart](images/perplexity.png "Chart showing Perplexity improving as BPW increases.")
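For orientation, BPW (bits per weight) here is just total file size in bits divided by the model's parameter count (roughly 30.5B total parameters for this model); e.g. for the `IQ5_K` below:

```
BPW = file_size_bits / n_params
    = (21.324 GiB × 1024³ × 8) / 30.5e9
    ≈ 6.0
```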

These first three are just test quants for baseline perplexity comparison:
* `bf16` 56.894 GiB (16.007 BPW)
  - Final estimate: PPL = TODO
* `Q8_0` 30.247 GiB (8.510 BPW)
  - Final estimate: PPL = TODO
* `Q4_0` 16.111 GiB (4.533 BPW)
  - Final estimate: PPL = TODO
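The `Final estimate` numbers are produced with the stock perplexity tool. A minimal sketch of the kind of command used, assuming a local copy of `wiki.test.raw` (the exact flags here are illustrative, not copied from my logs):

```bash
# measure perplexity over wiki.test.raw at the default 512-token context
./build/bin/llama-perplexity \
    -m Qwen3-Coder-30B-A3B-Instruct-IQ5_K.gguf \
    -f wiki.test.raw \
    -fa \
    --threads 8
```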

## `IQ5_K` 21.324 GiB (5.999 BPW)
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\.(0)\.attn_q.*=q8_0
blk\.(0)\.attn_k.*=q8_0
blk\.(0)\.attn_v.*=q8_0
blk\.(0)\.attn_output.*=q8_0

blk\..*\.attn_q.*=iq5_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_k

# Routed Experts
blk\.(0|47)\.ffn_down_exps\.weight=q8_0
blk\.(0|47)\.ffn_(gate|up)_exps\.weight=q8_0

blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k

# Non-Repeating Layers
token_embd\.weight=iq6_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ5_K.gguf \
    IQ5_K \
    192
```

</details>
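The `grep | sed` helper in these recipes just drops the comment lines and joins the remaining rules into the single comma-separated list that `--custom-q` expects; rules are matched top-down with the first hit winning, which is how the layer-0 `q8_0` overrides take precedence over the catch-alls. A quick illustration with a hypothetical two-rule input:

```bash
# strip comments, then collapse runs of newlines into commas and trim stray commas
rules="
# comment lines are dropped
blk\..*\.attn_q.*=iq5_k
blk\..*\.attn_k.*=iq6_k
"
echo "$rules" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
# prints: blk\..*\.attn_q.*=iq5_k,blk\..*\.attn_k.*=iq6_k
```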

## `IQ4_K` 17.878 GiB (5.030 BPW)
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\.(0)\.attn_q.*=q8_0
blk\.(0)\.attn_k.*=q8_0
blk\.(0)\.attn_v.*=q8_0
blk\.(0)\.attn_output.*=q8_0

blk\..*\.attn_q.*=iq5_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_k

# Routed Experts
blk\.(0|47)\.ffn_down_exps\.weight=q8_0
blk\.(0|47)\.ffn_(gate|up)_exps\.weight=q8_0

blk\..*\.ffn_down_exps\.weight=iq5_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_k

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ4_K.gguf \
    IQ4_K \
    192
```

</details>

## `IQ4_KSS` 15.531 GiB (4.370 BPW)
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\.(0)\.attn_q.*=q8_0
blk\.(0)\.attn_k.*=q8_0
blk\.(0)\.attn_v.*=q8_0
blk\.(0)\.attn_output.*=q8_0

blk\..*\.attn_q.*=iq5_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_k

# Routed Experts
blk\.(0|47)\.ffn_down_exps\.weight=q8_0
blk\.(0|47)\.ffn_(gate|up)_exps\.weight=q8_0

blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ4_KSS.gguf \
    IQ4_KSS \
    192
```

</details>

## `IQ4_KT` 14.438 GiB (4.062 BPW)
Final estimate: PPL = TODO

Mostly pure IQ4_KT meant for full GPU offload, similar to [turboderp-org/exllamav3](https://github.com/turboderp-org/exllamav3) quants; check out [ArtusDev's HuggingFace page](https://huggingface.co/ArtusDev) for some excellent EXL3 quants!

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\..*\.attn_q.*=iq4_kt
blk\..*\.attn_k.*=iq4_kt
blk\..*\.attn_v.*=iq4_kt
blk\..*\.attn_output.*=iq4_kt

# Routed Experts
blk\..*\.ffn_down_exps\.weight=iq4_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kt

# Non-Repeating Layers
token_embd\.weight=iq4_kt
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ4_KT.gguf \
    IQ4_KT \
    192
```

</details>

## `IQ3_K` 14.509 GiB (4.082 BPW)
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\.(0)\.attn_q.*=q8_0
blk\.(0)\.attn_k.*=q8_0
blk\.(0)\.attn_v.*=q8_0
blk\.(0)\.attn_output.*=q8_0

blk\..*\.attn_q.*=iq5_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_k

# Routed Experts
blk\.(0|47)\.ffn_down_exps\.weight=q8_0
blk\.(0|47)\.ffn_(gate|up)_exps\.weight=q8_0

blk\..*\.ffn_down_exps\.weight=iq4_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_k

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ3_K.gguf \
    IQ3_K \
    192
```

</details>

## `IQ3_KS` 13.633 GiB (3.836 BPW)
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\.(0)\.attn_q.*=q8_0
blk\.(0)\.attn_k.*=q8_0
blk\.(0)\.attn_v.*=q8_0
blk\.(0)\.attn_output.*=q8_0

blk\..*\.attn_q.*=iq4_ks
blk\..*\.attn_k.*=iq5_ks
blk\..*\.attn_v.*=iq5_ks
blk\..*\.attn_output.*=iq4_ks

# Routed Experts
blk\.(0|47)\.ffn_down_exps\.weight=q8_0
blk\.(0|47)\.ffn_(gate|up)_exps\.weight=q8_0

blk\..*\.ffn_down_exps\.weight=iq4_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq3_ks

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ3_KS.gguf \
    IQ3_KS \
    192
```

</details>

## `IQ2_KL` 11.516 GiB (3.240 BPW)
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\.(0)\.attn_q.*=q8_0
blk\.(0)\.attn_k.*=q8_0
blk\.(0)\.attn_v.*=q8_0
blk\.(0)\.attn_output.*=q8_0

blk\..*\.attn_q.*=iq5_k
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_k

# Routed Experts
blk\.(0|47)\.ffn_down_exps\.weight=q8_0
blk\.(0|47)\.ffn_(gate|up)_exps\.weight=q8_0

blk\..*\.ffn_down_exps\.weight=iq3_ks
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl

# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ2_KL.gguf \
    IQ2_KL \
    192
```

</details>

## `IQ2_KT` 9.469 GiB (2.664 BPW)
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\.(0)\.attn_q.*=iq5_ks
blk\.(0)\.attn_k.*=iq6_k
blk\.(0)\.attn_v.*=iq6_k
blk\.(0)\.attn_output.*=iq5_ks

blk\..*\.attn_q.*=iq4_kt
blk\..*\.attn_k.*=iq5_ks
blk\..*\.attn_v.*=iq5_ks
blk\..*\.attn_output.*=iq4_kt

# Routed Experts
blk\.(0|47)\.ffn_down_exps\.weight=iq4_kt
blk\.(0|47)\.ffn_(gate|up)_exps\.weight=iq4_kt

blk\..*\.ffn_down_exps\.weight=iq3_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kt

# Non-Repeating Layers
token_embd\.weight=iq4_kt
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ2_KT.gguf \
    IQ2_KT \
    192
```

</details>

## `IQ1_KT` 7.583 GiB (2.133 BPW)
Final estimate: PPL = TODO

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# 48 Repeating Layers [0-47]

# Attention
blk\.(0)\.attn_q.*=iq5_ks
blk\.(0)\.attn_k.*=iq6_k
blk\.(0)\.attn_v.*=iq6_k
blk\.(0)\.attn_output.*=iq5_ks

blk\..*\.attn_q.*=iq4_kt
blk\..*\.attn_k.*=iq5_ks
blk\..*\.attn_v.*=iq5_ks
blk\..*\.attn_output.*=iq4_kt

# Routed Experts
blk\.(0|47)\.ffn_down_exps\.weight=iq4_kt
blk\.(0|47)\.ffn_(gate|up)_exps\.weight=iq4_kt

blk\..*\.ffn_down_exps\.weight=iq2_kt
blk\..*\.ffn_(gate|up)_exps\.weight=iq1_kt

# Non-Repeating Layers
token_embd\.weight=iq4_kt
output\.weight=iq6_k
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    /mnt/raid/models/ubergarm/Qwen3-Coder-30B-A3B-Instruct-GGUF/Qwen3-Coder-30B-A3B-Instruct-IQ1_KT.gguf \
    IQ1_KT \
    192
```

</details>

## Quick Start
#### Full GPU Offload with CUDA, or Vulkan for AMD GPUs
```bash
# Compile CUDA backend
cmake -B ./build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON -DGGML_SCHED_MAX_COPIES=1 -DGGML_CUDA_F16=ON
cmake --build ./build --config Release -j $(nproc)

# Compile Vulkan backend
# Experimental: doesn't work with all quant types yet, needs more testing
# https://github.com/ikawrakow/ik_llama.cpp/discussions/590
cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_HIPBLAS=0 -DGGML_VULKAN=1
cmake --build build --config Release -j $(nproc)

# Run Server
./build/bin/llama-server \
    --model Qwen3-Coder-30B-A3B-Instruct-IQ3_KS.gguf \
    --alias ubergarm/Qwen3-Coder-30B-A3B-Instruct \
    --ctx-size 32768 \
    -ctk q8_0 -ctv q8_0 \
    -fa -fmoe \
    -ngl 99 \
    --parallel 1 \
    --threads 1 \
    --host 127.0.0.1 \
    --port 8080
```
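Once the server is up, it speaks the usual llama.cpp OpenAI-compatible HTTP API; a quick smoke test (the model name is whatever you set via `--alias`):

```bash
# simple chat completion against the local server
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ubergarm/Qwen3-Coder-30B-A3B-Instruct",
    "messages": [{"role": "user", "content": "Write a Python one-liner to reverse a string."}],
    "max_tokens": 128
  }'
```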

#### CPU-only Backend
```bash
# Compile
cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=0 -DGGML_VULKAN=0
cmake --build build --config Release -j $(nproc)

# Run Server
./build/bin/llama-server \
    --model Qwen3-Coder-30B-A3B-Instruct-IQ3_KS.gguf \
    --alias ubergarm/Qwen3-Coder-30B-A3B-Instruct \
    --ctx-size 32768 \
    -ctk q8_0 -ctv q8_0 \
    -fa -fmoe \
    -ub 4096 -b 4096 \
    --parallel 1 \
    --threads 8 \
    --host 127.0.0.1 \
    --port 8080 \
    --no-mmap
```
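To sanity-check throughput on your own box before settling on a quant, the bundled benchmark tool works; a minimal sketch (pick `-t` to match your physical core count):

```bash
# prompt-processing (pp512) and token-generation (tg128) throughput
./build/bin/llama-bench \
    -m Qwen3-Coder-30B-A3B-Instruct-IQ3_KS.gguf \
    -p 512 -n 128 \
    -t 8
```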

## References
* [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
* [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
* [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
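For anyone wanting to reproduce the imatrix referenced in the recipes, it is generated with the stock imatrix tool over a calibration corpus like the one linked above; a rough sketch under that assumption (exact flags are mine, not copied from my logs):

```bash
# generate an importance matrix from the BF16 model over the calibration corpus
./build/bin/llama-imatrix \
    -m Qwen3-Coder-30B-A3B-Instruct-BF16-00001-of-00002.gguf \
    -f ubergarm-imatrix-calibration-corpus-v02.txt \
    -o imatrix-Qwen3-Coder-30B-A3B-Instruct-BF16.dat
```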