Broken results
Tried Q4_K_L on the llama.cpp b5133 server and it's unusable due to extreme repetition issues. Something is clearly broken.
I can confirm that I'm experiencing this as well on the IQ4_XS quant. I tried the Q4_K_M quant and it was broken as well.
This has been opened as an issue on the llama.cpp GitHub:
Shoot.. hopefully it's a small fix, ideally not the quant! But if it is, I'll remake it promptly!
Just as a note, see https://www.reddit.com/r/LocalLLaMA/comments/1jzn9wj/comment/mn7iv7f
By using these arguments:
--flash-attn -ctk q4_0 -ctv q4_0 --ctx-size 16384 --override-kv tokenizer.ggml.eos_token_id=int:151336 --override-kv glm4.rope.dimension_count=int:64 --jinja
I was able to make the IQ4_XS quant work well for me on the latest build of llama.cpp.
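For reference, a full server invocation with those arguments might look something like the line below. This is just a sketch: the model filename is hypothetical, and -ngl 99 simply offloads as many layers as will fit on the GPU.
llama-server -m GLM-4-0414-IQ4_XS.gguf -ngl 99 --flash-attn -ctk q4_0 -ctv q4_0 --ctx-size 16384 --override-kv tokenizer.ggml.eos_token_id=int:151336 --override-kv glm4.rope.dimension_count=int:64 --jinja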
Thanks for the info, but why did you set the KV cache to q4? Won't the default fp16 give better accuracy?
Yes, you can use fp16 for better accuracy. I use q4 because I have 16GB of VRAM and want to fit as much of the model as possible onto the GPU.
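If you have the VRAM to spare, you can drop the cache-type flags (or set them explicitly, as sketched below) to keep llama.cpp's default f16 KV cache, at several times the memory cost of q4_0:
--flash-attn -ctk f16 -ctv f16 --ctx-size 16384 --override-kv tokenizer.ggml.eos_token_id=int:151336 --override-kv glm4.rope.dimension_count=int:64 --jinja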
Hi, I'd like to notify you that pull 13021 has been merged and the official release b5173 contains it. Quant re-creation is required.
Yup, I've started remaking them now!
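For anyone wanting to re-create a quant themselves on a fixed build (b5173 or later), the rough workflow with llama.cpp's own tools would be something like the following sketch; the checkpoint directory and output file names here are hypothetical:
python convert_hf_to_gguf.py ./GLM-4-0414 --outfile GLM-4-0414-F16.gguf
llama-quantize GLM-4-0414-F16.gguf GLM-4-0414-IQ4_XS.gguf IQ4_XS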