How to Import Models into Ollama

#1
by jier - opened

I want to import a GGUF model into Ollama. How should I write the Modelfile?

I'm using ollama run hf.co/Chun121/qwen3-4B-rpg-roleplay:Q4_K_M, but it's not running correctly. Could you provide a more detailed usage guide?

print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
llama_model_load: error loading model: missing tensor 'blk.0.attn_k_norm.weight'
llama_model_load_from_file_impl: failed to load model
panic: unable to load model: /root/.ollama/models/blobs/sha256-41a09afb836d2b5ab96d3e56197d11deabfe3b6f2b79d313687af4a4c4afd43e

goroutine 162 [running]:
github.com/ollama/ollama/runner/llamarunner.(*Server).loadModel(0xc00063c000, {0x25, 0x0, 0x1, {0x0, 0x0, 0x0}, 0xc00061a220, 0x0}, {0x7fff2236a10a, ...}, ...)
github.com/ollama/ollama/runner/llamarunner/runner.go:751 +0x395
created by github.com/ollama/ollama/runner/llamarunner.Execute in goroutine 1
github.com/ollama/ollama/runner/llamarunner/runner.go:848 +0xb57
time=2025-06-04T14:16:29.413+08:00 level=INFO source=server.go:625 msg="waiting for server to become available" status="llm server not responding"
time=2025-06-04T14:16:29.415+08:00 level=ERROR source=server.go:457 msg="llama runner terminated" error="exit status 2"
time=2025-06-04T14:16:29.664+08:00 level=ERROR source=sched.go:489 msg="error loading llama server" error="llama runner process has terminated: error loading model: missing tensor 'blk.0.attn_k_norm.weight'\nllama_model_load_from_file_impl: failed to load model"
[GIN] 2025/06/04 - 14:16:29 | 500 | 2.428391592s | 127.0.0.1 | POST "/api/generate"
Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.attn_k_norm.weight'
llama_model_load_from_file_impl: failed to load model
root@autodl-container-b9b842b3fc-4445dc68:~/autodl-tmp# time=2025-06-04T14:16:34.757+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.09328595 runner.size="4.7 GiB" runner.vram="4.7 GiB" runner.parallel=2 runner.pid=26130 runner.model=/root/.ollama/models/blobs/sha256-41a09afb836d2b5ab96d3e56197d11deabfe3b6f2b79d313687af4a4c4afd43e
time=2025-06-04T14:16:35.054+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.390141856 runner.size="4.7 GiB" runner.vram="4.7 GiB" runner.parallel=2 runner.pid=26130 runner.model=/root/.ollama/models/blobs/sha256-41a09afb836d2b5ab96d3e56197d11deabfe3b6f2b79d313687af4a4c4afd43e
time=2025-06-04T14:16:35.471+08:00 level=WARN source=sched.go:687 msg="gpu VRAM usage didn't recover within timeout" seconds=5.806662337 runner.size="4.7 GiB" runner.vram="4.7 GiB" runner.parallel=2 runner.pid=26130 runner.model=/root/.ollama/models/blobs/sha256-41a09afb836d2b5ab96d3e56197d11deabfe3b6f2b79d313687af4a4c4afd43e

You must first download the unsloth.Q4_K_M.gguf file locally, generate a base Modelfile template from the official qwen3:4b Modelfile, then edit it to reference the downloaded GGUF file. After creating the custom Modelfile, register the model with Ollama via ollama create and finally run it with ollama run. The error missing tensor 'blk.0.attn_k_norm.weight' indicates that Ollama could not load the correct GGUF file, typically because the FROM path in the Modelfile was incorrect.
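
If you don't have the GGUF locally yet, huggingface-cli can fetch it. A minimal sketch, assuming the quant is published as unsloth.Q4_K_M.gguf in that repo (check the Files tab on the Hub for the exact filename):

pip install -U "huggingface_hub[cli]"
huggingface-cli download Chun121/qwen3-4B-rpg-roleplay unsloth.Q4_K_M.gguf --local-dir .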

Use this command:
ollama show --modelfile qwen3:4b > Modelfile

It should produce a file like:
FROM qwen3:4b
PARAMETER ...
TEMPLATE """
...
"""
SYSTEM """
...
"""

Change the FROM line (the path) and the template as necessary; here is an example:

FROM ./unsloth.Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
TEMPLATE """
<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
SYSTEM """You are a helpful assistant."""


Register the new model:
ollama create chun/qwen3-4b-rpg:Q4_K_M -f Modelfile

Finally, check that it is listed:
ollama list

and run:
ollama run chun/qwen3-4b-rpg:Q4_K_M
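
You can also pass a prompt directly on the command line for a quick smoke test (the prompt text is just an example):

ollama run chun/qwen3-4b-rpg:Q4_K_M "Describe your character in one sentence."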

Hope it helps :)

For an absolute model path, write the FROM line like this:

FROM /home/username/models/qwen3-4B-rpg/unsloth.Q4_K_M.gguf
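
Before running ollama create, it's worth sanity-checking that the path exists and the file size looks plausible (the path here is a placeholder, as above):

ls -lh /home/username/models/qwen3-4B-rpg/unsloth.Q4_K_M.gguf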

Thank you for your help. Can I run it on a Mac computer?

Yep, pretty sure Ollama has the same behaviour on all platforms~

Thank you for your reply. I used the ollama run command and got an error:
Error: llama runner process has terminated: error loading model: missing tensor 'blk.0.attn_k_norm.weight'. Are there other ways I can try?

That's weird. Make sure the path in the Modelfile is correct, and that you run:

ollama create chun/qwen3-4b-rpg:Q4_K_M -f Modelfile

Also make sure Ollama is on the latest version; older versions can't run Qwen 3 models.
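
To check which version you're on, and (on Linux) to upgrade in place, the official install script also handles upgrades:

ollama --version
curl -fsSL https://ollama.com/install.sh | sh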

(screenshots attached: iShot_2025-06-05_08.50.13.png, preview.jpg)

I'm about to break down. Even after I changed the FROM field in the Modelfile to an absolute path, I still got the same error. I tried running the stock qwen3 model, and it worked fine.
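
If the path is definitely right and stock qwen3 works, one way to narrow it down is to inspect the downloaded GGUF itself: the gguf Python package can dump its tensor list, so you can check whether the k-norm tensors are actually there. If they're missing, the file is the problem (e.g. an incomplete download or a bad conversion), not the Modelfile. A sketch, assuming the file is in the current directory:

pip install gguf
gguf-dump unsloth.Q4_K_M.gguf | grep attn_k_norm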
