Hunyuan-A13B

#1133
by jacek2024 - opened

@mradermacher I just updated our llama.cpp fork. Please update the workers and queue the above-mentioned models.

Can it be run in ollama? It's giving me an error when I try to load it

It works on the latest llama.cpp, so it should work in ollama and all the other llama.cpp-based frontends as soon as they update to a llama.cpp version that supports it. Make sure you update to the latest development build of ollama, and if that doesn't help, try again in a few days.
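If you don't want to wait for a frontend update, one way to test the GGUF directly is to build llama.cpp from source, since new-architecture support lands there first. A minimal sketch; the model path and quant filename below are placeholders for whatever GGUF you downloaded:

```shell
# Build the latest llama.cpp from source.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j

# Run the model; replace the .gguf path with your actual download.
./build/bin/llama-cli -m /path/to/Hunyuan-A13B-Instruct.Q4_K_M.gguf -p "Hello" -n 64
```

If this works but ollama still fails, the problem is the ollama build being too old rather than the GGUF itself.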