Trouble running Q5_K_M with llama.cpp
I pulled the latest llama.cpp repo and built it as I normally do with no problems. I downloaded the Q5_K_M quant, apparently without errors, but when I try to load it with llama-server I get an abort with very little helpful information. I tested my build with a different model and it loads fine, so I assumed the download was corrupted and redownloaded from scratch with huggingface-cli download; that again completed without errors, yet llama-server aborts the same way.
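For reference, the download command was roughly the following; the repo name and --include pattern here are from memory, so treat them as approximate:
# approximate command - adjust the repo and quant pattern to the actual Q5_K_M files
huggingface-cli download unsloth/Kimi-K2-Instruct-GGUF \
    --include "*Q5_K_M*" \
    --local-dir Kimi-K2-Instruct-GGUF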
Can anyone confirm that llama.cpp can load this model?
@simusid oh so the latest llama.cpp doesn't yet have support for it - you'll have to wait for mainline support or use https://github.com/unslothai/llama.cpp
I wrote details on how to do it in https://docs.unsloth.ai/basics/kimi-k2-how-to-run-locally
@danielhanchen Thanks for that! I see your work on here and your comments on Reddit, and I want you to know it's appreciated.
Thank you @simusid :)
Hey all!
I'm also running into an issue with llama.cpp. I downloaded the unslothai fork of llama.cpp and followed the tutorial steps:
git clone https://github.com/unslothai/llama.cpp
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=OFF -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j --clean-first --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli
cp llama.cpp/build/bin/llama-* llama.cpp
The only change I made was switching -DGGML_CUDA to OFF, since leaving it on gave an error (I am on a Mac, so perhaps not surprising).
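In case it helps anyone else on a Mac: I believe the Metal backend is what provides GPU acceleration there and is normally enabled by default on Apple silicon, so the build would look something like this (the explicit -DGGML_METAL=ON is only for clarity and is an assumption on my part):
# assumed Mac variant of the build: Metal instead of CUDA
# GGML_METAL is normally ON by default on Apple silicon; shown explicitly here
cmake llama.cpp -B llama.cpp/build \
    -DBUILD_SHARED_LIBS=OFF -DGGML_CUDA=OFF -DGGML_METAL=ON -DLLAMA_CURL=ON
cmake --build llama.cpp/build --config Release -j \
    --target llama-quantize llama-cli llama-gguf-split llama-mtmd-cli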
When I try this command to launch Kimi-K2 (already locally available):
unslothai/llama.cpp/build/bin/llama-cli \
    --model unsloth/Kimi-K2-Instruct-GGUF/Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf \
    --n-gpu-layers 16 \
    --temp 0.6 \
    --min_p 0.01 \
    --ctx-size 16384 \
    --cache-type-k q8_0 \
    --seed 3407 \
    -ot ".ffn_.*_exps.=CPU"
I get these errors:
gguf_init_from_file: failed to open GGUF file 'unsloth/Kimi-K2-Instruct-GGUF/Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf'
llama_model_load: error loading model: llama_model_loader: failed to load model from unsloth/Kimi-K2-Instruct-GGUF/Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model 'unsloth/Kimi-K2-Instruct-GGUF/Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf'
main: error: unable to load model
It does not really try to load the model at all. Any thoughts or ideas? Thanks!
Edit
Never mind - this was due to a simple pathing error to my model directory on my external SSD. My apologies. I can CONFIRM that the unslothai llama.cpp fork as described above does attempt to load the model. I'm still waiting for it to finish loading, but wanted to update my post.
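For anyone who hits the same "failed to open GGUF file" message: it is worth checking that the path actually resolves before suspecting the model itself, e.g. something like this (the path is just my local layout):
# sanity check that the first shard exists at the path passed to --model
ls -lh unsloth/Kimi-K2-Instruct-GGUF/Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf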
Edit #2
I can confirm the model loads successfully via llama-cli when following the unsloth guide above.
I do wish I could start a server via llama-server, but I am guessing that is not supported just yet!
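If/when llama-server works with this model in the fork, I would expect the invocation to mirror the llama-cli one. An untested sketch, assuming llama-server accepts the same common flags (you would also need to add llama-server to the --target list when building):
# untested sketch - assumes llama-server shares the common flags used with llama-cli above
llama.cpp/build/bin/llama-server \
    --model unsloth/Kimi-K2-Instruct-GGUF/Kimi-K2-Instruct-UD-Q2_K_XL-00001-of-00008.gguf \
    --n-gpu-layers 16 \
    --ctx-size 16384 \
    --cache-type-k q8_0 \
    -ot ".ffn_.*_exps.=CPU" \
    --host 127.0.0.1 --port 8080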
Thanks again for all you do, unsloth!!
I'm unable to help, @x-polyglot-x, but I can say that with the link from @danielhanchen I was completely successful, and I'm running the giant Q5 model with no issues :O
Yes, my apologies. I also followed the link from @danielhanchen and got it working! My problem was just a bad path to the model itself (DOH!).
That said, I will likely wait for llama-server support before adopting or using this model seriously. I want to ask it longer prompts and upload files (etc.), and doing that via a chat in the terminal is a bit too cumbersome for me. But all in all, I'm glad I can run it and have it now!!