new GLM-4.1V-9B models
#1115
by
jacek2024
- opened
They are queued! :D
You can check for progress at http://hf.tst.eu/status.html or regularly check the model summary pages at https://hf.tst.eu/model#GLM-4.1V-9B-Base-GGUF and https://hf.tst.eu/model#GLM-4.1V-9B-Thinking-GGUF for quants to appear.
Glm4vForConditionalGeneration
is not yet supported by llama.cpp, and we also have not yet marked this architecture as a vision model. Unfortunately, we need to wait until llama.cpp implements support for it.
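For anyone wondering how this kind of failure shows up: the converter dispatches on the `architectures` field of the model's `config.json`, and an unknown value means conversion cannot proceed. A minimal sketch of that check, where the `supported` set is purely illustrative and NOT llama.cpp's actual registry (consult `convert_hf_to_gguf.py` for the real one):

```python
import json

# The "architectures" field as published in GLM-4.1V-9B's config.json.
config = json.loads('{"architectures": ["Glm4vForConditionalGeneration"]}')

# Illustrative subset of architectures a converter might recognize --
# an assumption for this sketch, not llama.cpp's real supported list.
supported = {"LlamaForCausalLM", "Qwen2ForCausalLM", "GlmForCausalLM"}

def conversion_possible(cfg: dict, known: set) -> bool:
    """Return True only if every declared architecture is recognized."""
    return all(arch in known for arch in cfg.get("architectures", []))

print(conversion_possible(config, supported))
```

Here this prints `False`, which is exactly why the quant jobs cannot run until upstream support lands.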