Inference does not run using Ollama 0.6.8 – Mistral-Small-3.1-24B-Instruct-2503-GGUF:Q4_K_M
Hi there, I'm fairly new to LLMs, but this quantisation doesn't seem to run for me with Ollama 0.6.8 [MacBook Air M4, 32 GB, macOS 15.4.1]:
Error: llama runner process has terminated: exit status 2
Relevant output from the server log:
clip_model_loader: tensor[220]: n_dims = 2, name = v.blk.9.ffn_gate.weight, tensor_size=8388608, offset=861257728, shape:[1024, 4096, 1, 1], type = f16
clip_model_loader: tensor[221]: n_dims = 2, name = v.blk.9.ffn_up.weight, tensor_size=8388608, offset=869646336, shape:[1024, 4096, 1, 1], type = f16
clip_model_loader: tensor[222]: n_dims = 1, name = v.blk.9.ln2.weight, tensor_size=4096, offset=878034944, shape:[1024, 1, 1, 1], type = f32
load_hparams: projector: pixtral
load_hparams: has_llava_proj: 0
load_hparams: minicpmv_version: 0
load_hparams: proj_scale_factor: 0
load_hparams: n_wa_pattern: 0
load_hparams: use_silu: 1
load_hparams: use_gelu: 0
load_hparams: model size: 837.36 MiB
load_hparams: metadata size: 0.08 MiB
clip_init: failed to load model '/Users/user/.ollama/models/blobs/sha256-402640c0a0e4e00cdb1e94349adf7c2289acab05fee2b20ee635725ef588f994': operator(): unable to find tensor mm.1.bias
ggml_metal_free: deallocating
panic: unable to load clip model: /Users/user/.ollama/models/blobs/sha256-402640c0a0e4e00cdb1e94349adf7c2289acab05fee2b20ee635725ef588f994
goroutine 37 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0x14000176360, {0x29, 0x0, 0x1, 0x0, {0x0, 0x0, 0x0}, 0x140004a7a10, 0x0}, ...)
/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:795 +0x264
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
/Users/runner/work/ollama/ollama/runner/llamarunner/runner.go:887 +0x994
time=2025-05-10T12:24:32.056+10:00 level=INFO source=server.go:623 msg="waiting for server to become available" status="llm server not responding"
time=2025-05-10T12:24:32.307+10:00 level=ERROR source=sched.go:458 msg="error loading llama server" error="llama runner process has terminated: exit status 2"
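In case it helps with reproduction: simply pulling the quant straight from this repo via Ollama's Hugging Face syntax should be enough to hit the error, e.g. ollama run hf.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF:Q4_K_M (which is roughly how I ran it).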
Thank you!
Edit: Sorry, just saw an update in the other discussion – the issue may be caused by Ollama not yet using the latest llama.cpp PR:
https://huggingface.co/unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF/discussions/2#681def7f67aec79d34b39d8f