ubergarm committed
Commit 4de9ffa · 1 Parent(s): 5eae6ef

initial commit


prepping and downloading

Files changed (2)
  1. .gitattributes +3 -0
  2. README.md +155 -0
.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.gguf filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ imatrix-*.dat filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,5 +1,160 @@
  ---
+ quantized_by: ubergarm
+ pipeline_tag: text-generation
+ base_model: moonshotai/Kimi-K2-Instruct-0905
  license: other
  license_name: modified-mit
  license_link: https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905/blob/main/LICENSE
+ base_model_relation: quantized
+ tags:
+ - mla
+ - imatrix
+ - conversational
+ - ik_llama.cpp
  ---
+
+ ## **WIP**
+
+ - [ ] download fp8 safetensors
+ - [ ] convert to bf16 GGUF
+ - [ ] calculate and upload imatrix from q8_0
+ - [ ] begin quantizing and releasing
+
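+ As a rough sketch of the planned workflow (assuming the usual llama.cpp-style tooling shipped with ik_llama.cpp; exact script names, paths, and flags may differ, and the file names below are placeholders):
+
+ ```bash
+ # hypothetical outline only; depending on the convert script version the fp8
+ # safetensors may first need casting to bf16 before conversion
+ python convert_hf_to_gguf.py /models/Kimi-K2-Instruct-0905 \
+     --outtype bf16 --outfile Kimi-K2-Instruct-0905-BF16.gguf
+
+ # make a q8_0 to compute the importance matrix against
+ ./build/bin/llama-quantize Kimi-K2-Instruct-0905-BF16.gguf Kimi-K2-Instruct-0905-Q8_0.gguf Q8_0
+
+ # calculate the imatrix over a calibration corpus
+ ./build/bin/llama-imatrix -m Kimi-K2-Instruct-0905-Q8_0.gguf \
+     -f calibration_data.txt -o imatrix-Kimi-K2-Instruct-0905.dat
+ ```
+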
+ Open a discussion if you have a specific target RAM+VRAM in mind for your rig and I'll see what I can do given the available quants. Cheers!
+
+ ## `ik_llama.cpp` imatrix Quantizations of moonshotai/Kimi-K2-Instruct-0905
+ This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support ik's latest SOTA quants and optimizations! Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc!
+
+ *NOTE*: `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
+
+ Some of ik's new quants are also supported by the [Nexesenex/croco.cpp](https://github.com/Nexesenex/croco.cpp) fork of KoboldCPP.
+
+ These quants provide best-in-class perplexity for a given memory footprint.
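+
+ If you only want one of the quants below, you can fetch it selectively with `huggingface-cli` (the repo and folder names here are placeholders; check the actual file listing first):
+
+ ```bash
+ # download only the files for a single quant into a local directory
+ huggingface-cli download ubergarm/Kimi-K2-Instruct-0905-GGUF \
+     --include "IQ2_KL/*" \
+     --local-dir ./Kimi-K2-Instruct-0905-GGUF
+ ```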
+
+ ## Big Thanks
+ Shout out to Wendell and the **Level1Techs** crew, the community [Forums](https://forum.level1techs.com/t/deepseek-deep-dive-r1-at-home/225826), and the [YouTube Channel](https://www.youtube.com/@Level1Techs)! **BIG thanks** for providing **BIG hardware** expertise and access to run these experiments and make these great quants available to the community!!!
+
+ Also thanks to all the folks in the quanting and inferencing community on the [BeaverAI Club Discord](https://huggingface.co/BeaverAI) and on [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/) for tips and tricks helping each other run, test, and benchmark all the fun new models!
+
+ ## Quant Collection
+ Compare with perplexity of the full size `Q8_0`: TODO
+
+ Final estimate: PPL = TODO
+
+ ![Perplexity Chart](images/perplexity.png "Chart showing Perplexity improving as BPW increases.")
+
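+ For reference, perplexity numbers like these are typically measured with `llama-perplexity` over the standard `wiki.test.raw` set; a minimal sketch (the model path is a placeholder, and the extra flags assume ik_llama.cpp):
+
+ ```bash
+ ./build/bin/llama-perplexity \
+     -m Kimi-K2-Instruct-0905-IQ2_KL.gguf \
+     -f wiki.test.raw \
+     -fa -fmoe -mla 3 \
+     --threads 48
+ ```
+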
+ ### `smol-IQ4_KSS` TODO
+ Final estimate: PPL = TODO
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ### `IQ3_KS` TODO
+ Final estimate: PPL = TODO
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ### `IQ2_KL` TODO
+ Final estimate: PPL = TODO
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ### `IQ2_KS` TODO
+ Final estimate: PPL = TODO
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ### `IQ1_KT` TODO
+ Final estimate: PPL = TODO
+
+ <details>
+
+ <summary>👈 Secret Recipe</summary>
+
+ ```bash
+ echo TODO
+ ```
+
+ </details>
+
+ ## Example Commands
+ ### Hybrid (multiple) CUDA + CPU
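+ If you have not already built `ik_llama.cpp` with CUDA support, a minimal build sketch (same cmake style as the CPU-only example below, with CUDA turned on; adjust options for your rig):
+
+ ```bash
+ cmake -B build -DGGML_CUDA=ON -DGGML_BLAS=OFF
+ cmake --build build --config Release -j $(nproc)
+ ```
+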
+ ```bash
+ # Two CUDA devices with enough VRAM to offload more layers
+ # Keep in mind Kimi-K2's MoE layers start at blk.1, unlike DeepSeek's which start at blk.3 (after the first dense layers)
+ ./build/bin/llama-server \
+     --model "$model" \
+     --alias ubergarm/Kimi-K2-Instruct-0905 \
+     --ctx-size 32768 \
+     -ctk q8_0 \
+     -fa -fmoe \
+     -mla 3 \
+     -ngl 99 \
+     -ot "blk\.(1|2|3)\.ffn_.*=CUDA0" \
+     -ot "blk\.(4|5|6)\.ffn_.*=CUDA1" \
+     -ot exps=CPU \
+     --parallel 1 \
+     --threads 48 \
+     --threads-batch 64 \
+     --host 127.0.0.1 \
+     --port 8080
+ ```
+
+ ### CPU-Only (no GPU)
+ ```bash
+ # compile
+ cmake -B build -DGGML_CUDA=0 -DGGML_BLAS=0 -DGGML_VULKAN=0
+ cmake --build build --config Release -j $(nproc)
+
+ # run server
+ # single CPU of a dual-socket rig configured with one NUMA node per socket
+ numactl -N 0 -m 0 \
+ ./build/bin/llama-server \
+     --model "$model" \
+     --alias ubergarm/Kimi-K2-Instruct-0905 \
+     --ctx-size 98304 \
+     -ctk q8_0 \
+     -fa -fmoe \
+     -mla 3 \
+     --parallel 1 \
+     --threads 128 \
+     --threads-batch 192 \
+     --numa numactl \
+     --host 127.0.0.1 \
+     --port 8080
+ ```
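+
+ Once either server is up, a quick sanity check against the OpenAI-compatible endpoint (adjust host/port to match your flags):
+
+ ```bash
+ curl http://127.0.0.1:8080/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{
+     "model": "ubergarm/Kimi-K2-Instruct-0905",
+     "messages": [{"role": "user", "content": "Hello!"}],
+     "max_tokens": 64
+   }'
+ ```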
+
+ ## References
+ * [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)