https://huggingface.co/huihui-ai/Huihui-GLM-4.1V-9B-Thinking-abliterated

#1171
by X5R - opened

They only abliterated the thinking model, not the base model, so just this one.

It's queued! :D

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Huihui-GLM-4.1V-9B-Thinking-abliterated-GGUF for quants to appear.

is GLM 9B thinking supported by llama.cpp? I don't think so

It unfortunately is not currently supported by llama.cpp. I don't manually check whether a model is supported before queuing unless I already suspect it isn't. GLM support in llama.cpp is such a mess that I have lost all overview of it. So many GLM-based models failed in the past due to improper llama.cpp support that at some point I will likely just mass-requeue all the failed ones and hope they have been fixed in the meantime.

-2000   21 si Huihui-GLM-4.1V-9B-Thinking-abliterated      error/1 ~Glm4vForConditionalGeneration
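For context, the architecture string in that error is what a converter like llama.cpp's keys on, and it comes from the `architectures` field in the model's config.json on the Hub. A minimal sketch of reading that field (the JSON snippet is illustrative, using the architecture name reported in the error above):

```python
import json

# Illustrative config.json fragment; the architecture name is the one
# reported in the queue error for this model.
config = json.loads('{"architectures": ["Glm4vForConditionalGeneration"]}')

# An unsupported architecture name here is why conversion to GGUF fails.
print(config["architectures"][0])  # Glm4vForConditionalGeneration
```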
