Vision not working on ollama - Tried updating templates

by usamakenway

The model doesn't understand the image on Ollama. I tried the original template and some others too.
If it works for someone else, lemme know.

Did you add the mmproj file?

Adding the mmproj as a second FROM, or downloading from HF via e.g. ollama run hf.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF:Q4_K_XL, leads to this:

clip_init: failed to load model '/home/user/.ollama/models/blobs/sha256-ca4d5ba2c7659021bf457abb2a3f346e9b83169655ab7ccd570b53dbe692abb2': load_hparams: unknown projector type: pixtral
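For reference, the two-FROM Modelfile route mentioned above looks roughly like this; the file and model names are placeholders for whatever you actually downloaded:

```
# Modelfile sketch: first FROM points at the main GGUF, second FROM at the mmproj GGUF
FROM ./Mistral-Small-3.1-24B-Instruct-2503-Q4_K_XL.gguf
FROM ./mmproj-F16.gguf
```

Then build and run it:

```
ollama create mistral-small-vision -f Modelfile
ollama run mistral-small-vision
```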
Unsloth AI org

@Mdubbya @owao @usamakenway

We just updated the models so they now support vision, thanks to the latest PR in llama.cpp.
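If you already pulled the GGUF before this update, Ollama keeps using the cached blobs, so you may need to pull again (or remove the old model first) to fetch the updated files. A rough sketch, reusing the tag from earlier in the thread:

```
ollama rm hf.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF:Q4_K_XL
ollama pull hf.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF:Q4_K_XL
```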

I am getting a similar but different error:

clip_init: failed to load model '/Users/felikz/.ollama/models/blobs/sha256-f5add93ad360ef6ccba571bba15e8b4bd4471f3577440a8b18785f8707d987ed': operator(): unable to find tensor mm.1.bias

Ollama 0.6.8 / M1 Max 32GB

Other models work well.

Same here: clip_init: failed to load model '/Users/Christian/.ollama/models/blobs/sha256-f5add93ad360ef6ccba571bba15e8b4bd4471f3577440a8b18785f8707d987ed': operator(): unable to find tensor mm.1.bias

OK, might be because Ollama didn't update to the latest PR in llama.cpp.

OK, thanks for taking the time to explain :)
