Tool Calls Not Working with --jinja Option
First, thank you very much for always distributing high-performance GGUF models.
I downloaded the Q4_K_XL model, but tool calls are not working properly in Claude Code.
I loaded the model with the --jinja option, but when I enter a command in Claude Code, only text like the following appears and no actual call is executed:
```
⏺ I'll help you clean up unused CSS in the TeacherEntry.vue file. Let me first read the file to analyze the template and styles.
Read
file_path
/Users/sampro/math-geogebra/math-geogebra-front/src/components/Entry/TeacherEntry.vue
```
Just in case, I tried using the original model's template with --jinja --chat-template-file official_glm45_template.jinja, but got the following:
```
common_init_from_params: setting dry_penalty_last_n to ctx_size = 131072
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
common_chat_templates_init: failed to parse chat template (defaulting to chatml): Expected comma in tuple at row 47, column 102:
{{ visible_text(m.content) }}
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
{%- elif m.role == 'assistant' -%}
srv init: initializing slots, n_slots = 1
slot init: id 0 | task -1 | new slot n_ctx_slot = 131072
main: model loaded
```
After this, it looked like the model was being called from Claude Code, but it got stuck in an endless loop of tool invocations and I had to forcibly terminate it.
Does this mean there is a problem with the template? I’d appreciate any help.
Thank you as always.
Edit: I am using a llama.cpp version built from source this morning.
I have the same problem, but with Qwen Code. Does it use an XML template like qwen3-coder? I am not sure.
Hi, I have the same or a similar problem:
- llama-b6089-bin-win-cuda-12.4-x64
- GLM-4.5-Air-IQ4_NL-00001-of-00002.gguf downloaded at 2025-08-05T09:32:00Z (UTC)
- .\llama-server.exe -ngl 99 -c 60000 -fa -ctk q8_0 -ctv q8_0 -m "[...]\GLM-4.5-Air-GGUF\GLM-4.5-Air-IQ4_NL-00001-of-00002.gguf" --jinja -ub 512 --temp 0.6 --top-p 0.95 --top-k 20 --repeat-penalty 1.05 --n-cpu-moe 44 --no-mmap -sm none
- tested with Cline and RooCode
I get the following error from llama.cpp as soon as a tool call result gets posted to the model:
```
srv log_server_r: request: POST /v1/chat/completions 127.0.0.1 200
got exception: {"code":500,"message":"Value is not callable: null at row 56, column 70:\n {%- if '' in content %}\n {%- set reasoning_content = ((content.split('')|first).rstrip('\n').split('')|last).lstrip('\n') %}\n ^\n {%- set content = (content.split('')|last).lstrip('\n') %}\n at row 56, column 72:\n {%- if '' in content %}\n {%- set reasoning_content = ((content.split('')|first).rstrip('\n').split('')|last).lstrip('\n') %}\n ^\n {%- set content = (content.split('')|last).lstrip('\n') %}\n at row 56, column 85:\n {%- if '' in content %}\n {%- set reasoning_content = ((content.split('')|first).rstrip('\n').split('')|last).lstrip('\n') %}\n ^\n {%- set content = (content.split('')|last).lstrip('\n') %}\n at row 56, column 106:\n {%- if '' in content %}\n {%- set reasoning_content = ((content.split('')|first).rstrip('\n').split('')|last).lstrip('\n') %}\n ^\n {%- set content = (content.split('')|last).lstrip('\n') %}\n at row 56, column 108:\n {%- if '' in content %}\n {%- set reasoning_content = ((content.split('')|first).rstrip('\n').split('')|last).lstrip('\n') %}\n ^\n {%- set content = (content.split('')|last).lstrip('\n') %}\n at row 56, column 9:\n {%- if '' in content %}\n {%- set reasoning_content = ((content.split('')|first).rstrip('\n').split('')|last).lstrip('\n') %}\n ^\n {%- set content = (content.split('')|last).lstrip('\n') %}\n at row 55, column 36:\n{%- else %}\n {%- if '' in content %}\n ^\n {%- set reasoning_content = ((content.split('')|first).rstrip('\n').split('')|last).lstrip('\n') %}\n at row 55, column 5:\n{%- else %}\n {%- if '' in content %}\n ^\n {%- set reasoning_content = ((content.split('')|first).rstrip('\n').split('')|last).lstrip('\n') %}\n at row 54, column 12:\n {%- set reasoning_content = m.reasoning_content %}\n{%- else %}\n ^\n {%- if '' in content %}\n at row 52, column 1:\n{%- set content = visible_text(m.content) %}\n{%- if m.reasoning_content is string %}\n^\n {%- set reasoning_content = m.reasoning_content %}\n at row 48, column 35:\n{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not content.endswith("/nothink")) else '' -}}\n{%- elif m.role == 'assistant' -%}\n ^\n<|assistant|>\n at row 45, column 1:\n{% for m in messages %}\n{%- if m.role == 'user' -%}<|user|>\n^\n{% set content = visible_text(m.content) %}{{ content }}\n at row 44, column 24:\n{%- endfor %}\n{% for m in messages %}\n ^\n{%- if m.role == 'user' -%}<|user|>\n at row 44, column 1:\n{%- endfor %}\n{% for m in messages %}\n^\n{%- if m.role == 'user' -%}<|user|>\n at row 1, column 1:\n[gMASK]\n^\n{%- if tools -%}\n","type":"server_error"}
```
(Note: the `think`-tag literals inside the pasted template appear to have been stripped by the page's HTML rendering, which is why the `content.split('')` calls above look empty.)
But simply omitting the --jinja parameter seems to work for me.
Looks like the built-in "fixed" jinja template is just broken 😔
https://github.com/ggml-org/llama.cpp/pull/15086
Is this possibly related to the issue?
Edit: I tried applying it, but with the --jinja option the server still only outputs the text for the tool call, and the tool doesn't actually get invoked.
I can confirm tool calls are not working for GLM-4.5-Air-UD-Q2_K_XL.gguf
file hash SHA256: a54d4870a1b4f5bc6aed46ae36984a0bd0783fad6bff67ff8369309f19927fa0
I do not see an error in llama.cpp though, using build: 6091 (22f060c9)
Which agent tool (Cline, Roo, Claude Code, etc.) did you test it on?
A self-programmed agent. I am using llama.cpp with --jinja and let it do the tool-call parsing etc., via llama.cpp's OpenAI-compatible chat endpoint.
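For reference, here is a minimal sketch of how such an agent can send tool definitions to llama.cpp's OpenAI-compatible endpoint. Only the `/v1/chat/completions` path comes from the logs in this thread; the `read_file` tool schema, the model name, and the port are hypothetical placeholders:

```python
import json
import urllib.request

def build_chat_request(messages, tools):
    """Build an OpenAI-style chat-completion payload with tool definitions."""
    return {"model": "glm-4.5-air", "messages": messages, "tools": tools}

# Hypothetical tool schema, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from disk",
        "parameters": {
            "type": "object",
            "properties": {"file_path": {"type": "string"}},
            "required": ["file_path"],
        },
    },
}]

payload = build_chat_request([{"role": "user", "content": "Read main.py"}], tools)

# Posting to a locally running llama-server (commented out so the sketch
# runs without a server; 8080 is assumed to be the server's port):
# req = urllib.request.Request(
#     "http://127.0.0.1:8080/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.loads(urllib.request.urlopen(req).read())
# tool_calls = reply["choices"][0]["message"].get("tool_calls", [])
```

With a working `--jinja` template, the server is supposed to parse the model's tool-call text itself and return it in the structured `tool_calls` field, which is exactly what is failing in this thread.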
I'm also seeing tool calling not work.
The chat template from the model creator (https://huggingface.co/zai-org/GLM-4.5-Air/blob/main/chat_template.jinja) instructs the model to format its tool calls in XML.
These GGUFs faithfully reproduce that template, so that's what the model is doing. Here's an example of what the IQ4_XS quant output for a tool call:
```
<tool_call>tool_brave_web_search_post
<arg_key>query</arg_key>
<arg_value>test search</arg_value>
<arg_key>count</arg_key>
<arg_value>5</arg_value>
</tool_call>
```
But then the app (I'm using open-webui) doesn't parse it and treat it as a tool call (I think it only handles JSON-formatted tool calls). Maybe apps need to be modified to handle XML-formatted tool calls.
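As a rough illustration of that idea, a client could translate GLM's XML tool-call blocks into structured calls itself. This is a sketch based only on the example output above (the `tool_call` / `arg_key` / `arg_value` tag names are taken from that output, not from any spec):

```python
import re

def parse_glm_tool_calls(text):
    """Extract GLM-style XML tool-call blocks into name/arguments dicts."""
    calls = []
    # Each block is "<tool_call>NAME\n<arg_key>K</arg_key>\n<arg_value>V</arg_value>...</tool_call>".
    for block in re.findall(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL):
        name = block.strip().split("\n", 1)[0].strip()
        args = dict(re.findall(
            r"<arg_key>(.*?)</arg_key>\s*<arg_value>(.*?)</arg_value>",
            block, re.DOTALL))
        calls.append({"name": name, "arguments": args})
    return calls

sample = """<tool_call>tool_brave_web_search_post
<arg_key>query</arg_key>
<arg_value>test search</arg_value>
<arg_key>count</arg_key>
<arg_value>5</arg_value>
</tool_call>"""

print(parse_glm_tool_calls(sample))
# [{'name': 'tool_brave_web_search_post', 'arguments': {'query': 'test search', 'count': '5'}}]
```

Note that all argument values come back as strings (`'5'`, not `5`); a real client would still need to coerce them using the tool's JSON schema.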
GLM 4.5 Anthropic-style tool-calling jinja template: https://pastebin.com/CfMw7hFS
But:
```
common_chat_templates_init: failed to parse chat template (defaulting to chatml): Expected comma in tuple at row 53, column 102:
{{ visible_text(m.content) }}
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
```
So it only "worked" because of "failed to parse chat template (defaulting to chatml)" — the template was never actually used.
Edit 2: I made two modifications to the jinja template from that link, and it now works in Claude Code.
- The `.endswith()` error

From:
```
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
```
To:
```
{%- set user_content = visible_text(m.content) -%}
{{ user_content }}
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not user_content.endswith("/nothink")) else '' -}}
```
- Removed `ensure_ascii=False`
```
{{ tool | tojson(ensure_ascii=False) }}         → {{ tool | tojson }}
{{ tc.arguments | tojson(ensure_ascii=False) }} → {{ tc.arguments | tojson }}
```
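For context, `ensure_ascii` in the template's `tojson` filter mirrors the flag of the same name on Python's `json.dumps`: it only controls whether non-ASCII characters are escaped to `\uXXXX`, so dropping it changes the serialized form of non-ASCII text but not the JSON structure (presumably llama.cpp's template engine simply doesn't accept keyword arguments to `tojson`). A quick Python illustration:

```python
import json

payload = {"query": "café"}  # hypothetical tool arguments with a non-ASCII character

# Default (ensure_ascii=True): non-ASCII characters are escaped.
print(json.dumps(payload))                      # {"query": "caf\u00e9"}

# ensure_ascii=False: raw UTF-8 passes through unescaped.
print(json.dumps(payload, ensure_ascii=False))  # {"query": "café"}
```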
I hope this helps someone.
I understand that Claude Code and Roo Code use an XML format for tool invocation, and that the default format for GLM 4.5 is also XML.
However, when I use the JSON-style jinja template (https://pastebin.com/CfMw7hFS), Claude Code and Roo Code work well, whereas the default XML-style jinja template results in errors.
I don't understand why this happens. If anyone is familiar with this issue, please explain.
Edit: I confirmed that when I apply only the modifications mentioned above to GLM 4.5's original (XML tool call) template, tool calls work well with Roo Code (XML tool calls) against GLM 4.5 Air running on llama.cpp.
The only change needed, from the original GLM 4.5 jinja:
```
{{ visible_text(m.content) }}
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not visible_text(m.content).endswith("/nothink")) else '' -}}
```
To:
```
{%- set temp_content = visible_text(m.content) %}
{{ temp_content }}
{{- '/nothink' if (enable_thinking is defined and not enable_thinking and not temp_content.endswith("/nothink")) else '' -}}
```
You can now use XML templates!