WIP
- download fp8 safetensors
- cast fp8 safetensors to bf16 safetensors
- convert to bf16 GGUF
- calculate and upload imatrix from q8_0
- begin quantizing and releasing
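Roughly, those steps look like the following (a sketch only: local paths and the calibration file are hypothetical, the fp8→bf16 cast needs its own script, and the llama.cpp-style convert/quantize/imatrix tools shown are assumed to be the ones used):

```shell
# 1. download the fp8 safetensors (hypothetical local dir)
huggingface-cli download moonshotai/Kimi-K2-Instruct-0905 --local-dir Kimi-K2-fp8

# 2-3. after casting fp8 -> bf16 safetensors (separate script, not shown),
#      convert the bf16 safetensors to a bf16 GGUF
python convert_hf_to_gguf.py Kimi-K2-bf16 --outtype bf16 --outfile Kimi-K2-bf16.gguf

# 4. quantize to q8_0, then compute the imatrix from that q8_0
./build/bin/llama-quantize Kimi-K2-bf16.gguf Kimi-K2-Q8_0.gguf Q8_0
./build/bin/llama-imatrix -m Kimi-K2-Q8_0.gguf -f calibration.txt -o imatrix.dat

# 5. use the imatrix when quantizing each release quant
./build/bin/llama-quantize --imatrix imatrix.dat Kimi-K2-bf16.gguf Kimi-K2-IQ3_KS.gguf IQ3_KS
```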
Open a discussion if you have a specific target RAM+VRAM in mind for your rig and I'll see what I can do given the available quants. Cheers!
# ik_llama.cpp imatrix Quantizations of moonshotai/Kimi-K2-Instruct-0905
This quant collection REQUIRES the ik_llama.cpp fork to support ik's latest SOTA quants and optimizations! Do not download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc.!
NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCpp. These quants provide best-in-class perplexity for the given memory footprint.
## Big Thanks
Shout out to Wendell and the Level1Techs crew, the community forums, and YouTube channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models!
## Quant Collection
Compare with the perplexity of the full-size Q8_0:
TODO
Final estimate: PPL = TODO
smol-IQ4_KSS
TODO
Final estimate: PPL = TODO
👈 Secret Recipe
echo TODO
IQ3_KS
TODO
Final estimate: PPL = TODO
👈 Secret Recipe
echo TODO
IQ2_KL
TODO
Final estimate: PPL = TODO
👈 Secret Recipe
echo TODO
IQ2_KS
TODO
Final estimate: PPL = TODO
👈 Secret Recipe
echo TODO
IQ1_KT
TODO
Final estimate: PPL = TODO
👈 Secret Recipe
echo TODO
## Example Commands
### Hybrid (multiple) CUDA + CPU
```bash
# Two CUDA devices with enough VRAM to offload more layers
# Keep in mind Kimi-K2's routed-expert layers start at blk.1,
# unlike DeepSeek's which start at blk.3 (the first layers are dense)
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Kimi-K2-Instruct-0905 \
    --ctx-size 32768 \
    -ctk q8_0 \
    -fa -fmoe \
    -mla 3 \
    -ngl 99 \
    -ot "blk\.(1|2|3)\.ffn_.*=CUDA0" \
    -ot "blk\.(4|5|6)\.ffn_.*=CUDA1" \
    -ot exps=CPU \
    --parallel 1 \
    --threads 48 \
    --threads-batch 64 \
    --host 127.0.0.1 \
    --port 8080
```
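To see what those `-ot` overrides actually match, here is a quick grep demo against sample GGUF tensor names (the names below are illustrative, following the standard `blk.<layer>.<name>` convention, not dumped from the model; the assumption is that override rules are tried in order, so the more specific CUDA rules take precedence over the catch-all `exps=CPU`):

```shell
# Illustrative GGUF tensor names in the blk.<layer>.<name> convention
names='blk.1.ffn_gate_exps.weight
blk.2.ffn_up_exps.weight
blk.4.ffn_down_exps.weight
blk.1.attn_q.weight'

# -ot "blk\.(1|2|3)\.ffn_.*=CUDA0": ffn tensors of layers 1-3 go to GPU 0
echo "$names" | grep -E 'blk\.(1|2|3)\.ffn_.*'
# prints blk.1.ffn_gate_exps.weight and blk.2.ffn_up_exps.weight

# -ot exps=CPU: any remaining routed-expert tensors (e.g. blk.4) stay on CPU
echo "$names" | grep 'exps'
```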
### CPU-Only (no GPU)
```bash
# compile CPU-only (no CUDA/BLAS/Vulkan backends)
cmake -B build -DGGML_CUDA=0 -DGGML_BLAS=0 -DGGML_VULKAN=0
cmake --build build --config Release -j $(nproc)

# run server on a single CPU of a dual-socket rig
# configured with one NUMA node per socket
numactl -N 0 -m 0 \
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/Kimi-K2-Instruct-0905 \
    --ctx-size 98304 \
    -ctk q8_0 \
    -fa -fmoe \
    -mla 3 \
    --parallel 1 \
    --threads 128 \
    --threads-batch 192 \
    --numa numactl \
    --host 127.0.0.1 \
    --port 8080
```
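Before a NUMA-pinned run like this, mainline llama.cpp's NUMA notes suggest disabling automatic NUMA balancing, and dropping the page cache if the model file was first read from the other node; assuming ik_llama.cpp inherits that behavior, the one-time setup looks like:

```shell
# Run as root before launching the server (assumption: ik_llama.cpp follows
# mainline llama.cpp's NUMA guidance here)
echo 0 > /proc/sys/kernel/numa_balancing   # stop the kernel migrating pages between nodes
echo 3 > /proc/sys/vm/drop_caches          # let the model mmap re-read cleanly on node 0
```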
## References
- Base model: moonshotai/Kimi-K2-Instruct-0905