Ollama with GGUF
#6
by smaram · opened
I need some help, please. I’m trying to use Mixtral-8x7B-Instruct-v0.1-function-calling-v3.Q4_K.gguf
with Ollama, but I’m getting the following error:
ollama run mixtral-trelis
Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.ffn_down_exps.weight'
https://github.com/ollama/ollama/blob/main/docs/import.md#Importing-a-GGUF-based-model-or-adapter
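In case it helps anyone landing here: the linked docs describe importing a local GGUF through a Modelfile. A minimal sketch of that workflow, using the filename and model name from the question (paths are illustrative; adjust to your setup). Note that the `missing tensor 'blk.0.ffn_down_exps.weight'` error likely means the GGUF predates llama.cpp's merged-expert MoE tensor layout, so even a correct import may fail until you use a more recently converted quant.

```shell
# Write a minimal Modelfile pointing at the local GGUF file
# (filename from the question; use your own path).
cat > Modelfile <<'EOF'
FROM ./Mixtral-8x7B-Instruct-v0.1-function-calling-v3.Q4_K.gguf
EOF

# Register the model with Ollama under a local name, then run it.
ollama create mixtral-trelis -f Modelfile
ollama run mixtral-trelis
```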
smaram changed discussion status to closed
Howdy, did you solve this?