ubergarm committed
Commit bc32e07 · 1 Parent(s): aba3964

add BF16 and Q8_0 size info

Files changed (1): README.md (+56 -3)
README.md CHANGED
@@ -1,3 +1,56 @@
- ---
- license: mit
- ---
+ ---
+ quantized_by: ubergarm
+ pipeline_tag: text-generation
+ base_model: zai-org/GLM-4.5-Air
+ license: mit
+ base_model_relation: quantized
+ tags:
+ - imatrix
+ - conversational
+ - ik_llama.cpp
+ ---
+
+ This is an experimental placeholder with an imatrix, not for general-purpose use just yet. I'm not releasing any quants until the various PRs are in place and better tested.
+
+ Check the References below for the GitHub discussions where folks are working on adding support for this model.
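+
+ For context, an imatrix for llama.cpp-family quantizers is generated roughly as follows. This is a sketch only, assuming the fork keeps mainline's `llama-imatrix` interface; the model and output file names are illustrative, and I'm presuming the calibration corpus is the one linked in the References:
+
+ ```bash
+ # Sketch: compute an importance matrix over the BF16 model
+ # using the calibration corpus linked in the References below.
+ ./build/bin/llama-imatrix \
+     -m GLM-4.5-Air-BF16.gguf \
+     -f ubergarm-imatrix-calibration-corpus-v02.txt \
+     -o imatrix-GLM-4.5-Air.dat
+ ```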
+
+ Keep an eye out for new PRs and follow along; once this is tested and confirmed working correctly, I hope to release some quants for both this smaller Air model and the larger one too.
+
+ ## `ik_llama.cpp` imatrix Quantizations of zai-org/GLM-4.5-Air
+ This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc!
+
+ *NOTE* `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
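+
+ If you want to try the fork first, here is a minimal build-and-smoke-test sketch. It assumes the fork builds with CMake and ships the same `llama-cli` binary layout as mainline llama.cpp; the GGUF path is an illustrative placeholder:
+
+ ```bash
+ # Clone and build ik_llama.cpp (CMake, same layout as mainline llama.cpp)
+ git clone https://github.com/ikawrakow/ik_llama.cpp
+ cd ik_llama.cpp
+ cmake -B build
+ cmake --build build --config Release -j
+
+ # Smoke-test with any GGUF you already have (path is illustrative)
+ ./build/bin/llama-cli -m /models/your-existing-model.gguf -p "Hello" -n 32
+ ```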
+
+ Some of ik's new quants are supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCpp.
+
+ These quants provide best-in-class perplexity for the given memory footprint.
+
+ ## Big Thanks
+ Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and the [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!
30
+
31
+ Also thanks to all the folks in the quanting and inferencing community on [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models!
+
+ ## Quant Collection
+ Perplexity computed against *wiki.test.raw*.
+
+ ![Perplexity Chart](images/perplexity.png "Chart showing Perplexity improving as BPW increases.")
+
+ These first two are just test quants for baseline perplexity comparison (measured as sketched below):
+ * `BF16` 203.436 GiB (16.004 BPW)
+   - Final estimate: PPL = TODO
+ * `Q8_0` 108.119 GiB (8.505 BPW)
+   - Final estimate: PPL = TODO
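+
+ The `Final estimate: PPL = ...` lines above are the tail of a `llama-perplexity` run. Here is a sketch of how such a run typically looks with llama.cpp-family tools, assuming the fork keeps mainline's `llama-perplexity` interface; the wikitext-2 URL is the one mainline llama.cpp documents, and the model path is illustrative:
+
+ ```bash
+ # Fetch the standard wikitext-2 test set used for PPL comparisons
+ wget https://huggingface.co/datasets/ggml-org/ci/resolve/main/wikitext-2-raw-v1.zip
+ unzip wikitext-2-raw-v1.zip
+
+ # Compute perplexity against wiki.test.raw (model path illustrative)
+ ./build/bin/llama-perplexity \
+     -m GLM-4.5-Air-Q8_0.gguf \
+     -f wikitext-2-raw/wiki.test.raw
+ ```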
+
+ TODO
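+
+ As a sanity check on the size figures above: bits-per-weight is just file size in bits divided by weight count, so both rows should imply the same parameter count. A quick sketch of the arithmetic (derived here, not taken from the model card):
+
+ ```bash
+ # params = size_GiB * 2^30 bytes * 8 bits / BPW
+ awk 'BEGIN {
+   bf16 = 203.436 * 2^30 * 8 / 16.004;  # BF16 row
+   q8   = 108.119 * 2^30 * 8 / 8.505;   # Q8_0 row
+   printf "BF16 implies %.1fB weights\nQ8_0 implies %.1fB weights\n", bf16/1e9, q8/1e9
+ }'
+ # Both print ~109.2B, so the GiB and BPW columns are self-consistent.
+ ```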
+
+ ## Quick Start
+ ```bash
+ TODO
+ ```
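+
+ Until the real Quick Start lands, a minimal serving sketch, assuming the fork ships mainline's `llama-server`; the model path, context size, and thread count are illustrative assumptions, not tested values:
+
+ ```bash
+ # Illustrative only: serve a quant via the OpenAI-compatible HTTP server
+ ./build/bin/llama-server \
+     --model GLM-4.5-Air-Q8_0.gguf \
+     --ctx-size 8192 \
+     --threads 16 \
+     --host 127.0.0.1 --port 8080
+ ```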
+
+ ## References
+ * [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
+ * [Getting Started Guide (already out of date lol)](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
+ * [ubergarm-imatrix-calibration-corpus-v02.txt](https://gist.github.com/ubergarm/edfeb3ff9c6ec8b49e88cdf627b0711a?permalink_comment_id=5682584#gistcomment-5682584)
+ * [Mainline llama.cpp Draft PR14939](https://github.com/ggml-org/llama.cpp/pull/14939)
+ * [ik_llama.cpp GLM-4.5 MoE PR668](https://github.com/ikawrakow/ik_llama.cpp/pull/668)