Unsloth template changes broke tool calling

#1
by squid2 - opened

I was having an issue with multi-turn tool calling: after the second tool call's result is sent back to the model, the model's third tool call comes back incorrectly formatted and embedded inside the response content rather than in the tool_calls field. I don't have this issue when using Mistral AI's official GGUFs here: https://huggingface.co/mistralai/Devstral-Small-2507_gguf

This is my example script performing tool calls:

import requests
import json

# --- 1. Define the Python functions to be used as tools ---

def get_current_weather(location: str, unit: str = "celsius") -> str:
    """
    Get the current weather in a given location.

    Args:
        location (str): The city and state, e.g., "San Francisco, CA"
        unit (str): The unit of temperature, can be "celsius" or "fahrenheit".

    Returns:
        str: A JSON string with the location, temperature, and unit.
    """
    print(f"--- Called function get_current_weather(location='{location}', unit='{unit}') ---")
    # In a real application, this would call a weather API.
    # For this example, we'll return mock data.
    weather_info = {
        "location": location,
        "temperature": "15",
        "unit": unit,
        "forecast": ["cloudy", "chance of rain"],
    }
    return json.dumps(weather_info)

def get_latest_news(topic: str) -> str:
    """
    Get the latest news headlines for a given topic.

    Args:
        topic (str): The topic to search for news, e.g., "technology".

    Returns:
        str: A JSON string with a list of news headlines.
    """
    print(f"--- Called function get_latest_news(topic='{topic}') ---")
    # In a real application, this would call a news API.
    news_info = {
        "topic": topic,
        "headlines": [
            "Mistral AI releases Devstral, a new model for developers.",
            "The future of AI in software development.",
            "Market trends for AI-powered tools.",
        ],
    }
    return json.dumps(news_info)

def calculate(expression: str) -> str:
    """
    Performs a mathematical calculation.

    Args:
        expression (str): The mathematical expression to evaluate.

    Returns:
        str: The result of the calculation.
    """
    print(f"--- Called function calculate(expression='{expression}') ---")
    try:
        # WARNING: eval() is not safe for production use with untrusted input.
        # This is for demonstration purposes only. Use a safe evaluation library.
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

# A dictionary to map function names to the actual functions
available_tools = {
    "get_current_weather": get_current_weather,
    "get_latest_news": get_latest_news,
    "calculate": calculate,
}

# --- 2. Define the main execution logic ---

def main():
    """
    Main function to run the tool-calling conversation loop.
    """
    # The URL of your local llama-server instance
    llama_server_url = "http://localhost:8080/v1/chat/completions"
    headers = {"Content-Type": "application/json"}

    # The prompt for the model
    user_prompt = "What's the weather like in Boston, and what is 5*7? After that, tell me the latest news on technology."
    print(f"User Prompt: {user_prompt}\n")

    # The conversation history
    messages = [{"role": "user", "content": user_prompt}]

    # Format the tools for the API request
    tools_for_api = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g., San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "get_latest_news",
                "description": "Get the latest news headlines for a given topic.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "topic": {
                            "type": "string",
                            "description": "The topic to search for news, e.g., technology",
                        }
                    },
                    "required": ["topic"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "calculate",
                "description": "Performs a mathematical calculation.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "expression": {
                            "type": "string",
                            "description": "The mathematical expression to evaluate.",
                        }
                    },
                    "required": ["expression"],
                },
            },
        },
    ]

    try:
        while True:
            # --- 3. Make the API call to the model ---
            payload = {
                "model": "devstral",
                "messages": messages,
                "tools": tools_for_api,
                "tool_choice": "auto",
            }

            #print(f"Calling llm with payload:\n{json.dumps(payload, indent=4)}")
            response = requests.post(llama_server_url, headers=headers, json=payload)
            response.raise_for_status()
            response_json = response.json()
            response_message = response_json["choices"][0]["message"]
            #print(f"Got LLM response:\n{json.dumps(response_message, indent=4)}")

            # --- 4. Add the entire assistant response to history ---
            messages.append(response_message)
            
            tool_calls = response_message.get("tool_calls")
            content = response_message.get("content")

            # --- 5. Display any text content from the model ---
            if content:
                # Label the response as "Interim" if tools are also being called
                response_type = "Interim Model Response" if tool_calls else "Final Model Answer"
                print(f"\n{response_type}:\n{content}\n")

            # --- 6. Check if the model wants to call tools ---
            if tool_calls:
                print("--- Model requested tool calls ---")
                
                # --- 7. Execute the requested tool calls ---
                for tool_call in tool_calls:
                    function_name = tool_call["function"]["name"]
                    function_to_call = available_tools.get(function_name)

                    if not function_to_call:
                        print(f"Error: Model tried to call unknown function '{function_name}'")
                        continue

                    function_args = json.loads(tool_call["function"]["arguments"])
                    function_response = function_to_call(**function_args)

                    # --- 8. Append the tool's response to the history ---
                    messages.append(
                        {
                            "tool_call_id": tool_call["id"],
                            "role": "tool",
                            "name": function_name,
                            "content": function_response,
                        }
                    )
                
                print("\n--- Sending tool results back to the model ---\n")
                continue # Continue the loop to get the next response

            else:
                # --- 9. No tool calls, so the conversation is complete ---
                break # Exit the loop

    except requests.exceptions.RequestException as e:
        print(f"An error occurred: {e}")
        print("Please ensure your llama-server is running and accessible at the specified URL.")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")

# --- Entry point of the script ---
if __name__ == "__main__":
    main()

Here are the outputs with Mistral's own Devstral GGUFs:

User Prompt: What's the weather like in Boston, and what is 5*7? After that, tell me the latest news on technology.

--- Model requested tool calls ---
--- Called function get_current_weather(location='Boston, MA', unit='celsius') ---

--- Sending tool results back to the model ---

--- Model requested tool calls ---
--- Called function calculate(expression='5*7') ---

--- Sending tool results back to the model ---

--- Model requested tool calls ---
--- Called function get_latest_news(topic='technology') ---

--- Sending tool results back to the model ---


Final Model Answer:
The current weather in Boston, MA is 15°C with cloudy skies and a chance of rain. The result of 5*7 is 35. Here are the latest technology news headlines:
1. Mistral AI releases Devstral, a new model for developers.
2. The future of AI in software development.
3. Market trends for AI-powered tools.

Here are the outputs with the Unsloth Devstral GGUF:

User Prompt: What's the weather like in Boston, and what is 5*7? After that, tell me the latest news on technology.

--- Model requested tool calls ---
--- Called function get_current_weather(location='Boston, MA', unit='fahrenheit') ---

--- Sending tool results back to the model ---

--- Model requested tool calls ---
--- Called function calculate(expression='5*7') ---

--- Sending tool results back to the model ---


Final Model Answer:
{
  "tool_calls": [
    {
      "name": "get_latest_news",
      "arguments": {
        "topic": "technology"
      },
      "id": "9m9v29kVb"
    }
  ]
}

Notice that the third/final tool call is incorrectly formatted and appears inside the response content. When the model responds correctly with a tool call, the response is structured like this:

{
    "role": "assistant",
    "content": null,
    "tool_calls": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\":\"Boston\"}"
            },
            "id": "d47f862c7"
        }
    ]
}

When the model makes an improper tool call (as happens on the third tool call), the response is structured like this:

{
    "role": "assistant",
    "content": "{\n  \"tool_calls\": [\n    {\n      \"name\": \"get_latest_news\",\n      \"arguments\": {\n        \"topic\": \"technology\"\n      },\n      \"id\": \"d47f862c9\"\n    }\n  ],\n  \"content\": \"The result of 5*7 is 35.\"\n}"
}
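
As a client-side stopgap, a response like that can be detected and repaired before it is appended to the history by checking whether the assistant content parses as JSON with a tool_calls key. Below is a minimal sketch; the helper name and the reconstructed OpenAI-style message shape are my own assumptions, not anything llama-server returns:

import json

def recover_embedded_tool_calls(message: dict) -> dict:
    """If the assistant stuffed a tool_calls JSON blob into content,
    rebuild a properly structured assistant message from it."""
    content = message.get("content")
    if not content or message.get("tool_calls"):
        return message  # nothing to fix
    try:
        parsed = json.loads(content)
    except (json.JSONDecodeError, TypeError):
        return message  # plain text, leave it alone
    if not isinstance(parsed, dict) or "tool_calls" not in parsed:
        return message
    tool_calls = [
        {
            "type": "function",
            "id": call.get("id", ""),
            "function": {
                "name": call["name"],
                # OpenAI-style messages carry arguments as a JSON string
                "arguments": json.dumps(call.get("arguments", {})),
            },
        }
        for call in parsed["tool_calls"]
    ]
    return {
        "role": "assistant",
        "content": parsed.get("content"),
        "tool_calls": tool_calls,
    }

In the script above this could be applied right before messages.append(response_message), e.g. response_message = recover_embedded_tool_calls(response_message).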

@squid2 We can investigate, but Mistral's official GGUF doesn't support tool calling, so we're unsure how you're getting results with it.

We tested every scenario for tool calling and it worked perfectly, so we're unsure why you're not having the same experience.

Where are you using this btw? llama.cpp, lmstudio?

I'm using llama.cpp (with llama-server). Even though I didn't see any mention of tools in the chat template for Mistral's official Devstral GGUF, tool calling worked perfectly out of the box with it. Here are the version details and the commands I used to launch llama-server.

sultan@Sultans-MacBook-Pro llm_scripts % llama-server --version
version: 5840 (75c91de6)
built with Apple clang version 17.0.0 (clang-1700.0.13.3) for arm64-apple-darwin24.4.0

sultan@Sultans-MacBook-Pro llm_scripts % cat devstral_unsloth.sh
llama-server -hf unsloth/Devstral-Small-2507-GGUF:IQ4_XS --temp 0.15 -fa -ctk q8_0 -ctv q8_0 -c 262144 -np 2 --jinja

sultan@Sultans-MacBook-Pro llm_scripts % cat devstral.sh 
llama-server -hf mistralai/Devstral-Small-2507_gguf:Q4_K_M --temp 0.15 -fa -ctk q8_0 -ctv q8_0 -c 262144 -np 2 --jinja

It looks like vllm, and a parser I'm working on, were using the existing format of [TOOL_CALLS][ <valid JSON of tool calls> ], whereas the new template uses {{- "[TOOL_CALLS]" + tool.function.name + "[ARGS]" + arguments }}

https://github.com/vllm-project/vllm/blob/e2de455c349df8385b18fe447beb6325dcb6af9c/vllm/entrypoints/openai/tool_parsers/mistral_tool_parser.py#L78
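
To make the mismatch concrete, the two on-the-wire formats compare roughly like this. The example strings and the parse_new_format helper are illustrative reconstructions from the template snippet and the vllm parser linked above, not code from either project:

import json
import re

# Legacy format: [TOOL_CALLS] followed by a JSON array of calls
legacy = '[TOOL_CALLS][{"name": "get_latest_news", "arguments": {"topic": "technology"}, "id": "9m9v29kVb"}]'

# Format implied by the new template: name and arguments separated by [ARGS], with no closing tags
new = '[TOOL_CALLS]get_latest_news[ARGS]{"topic": "technology"}'

def parse_new_format(text: str):
    """Very rough parse of the new single-call format."""
    match = re.match(r"\[TOOL_CALLS\](.+?)\[ARGS\](.*)", text, re.DOTALL)
    if not match:
        return None
    name, args = match.groups()
    return {"name": name, "arguments": json.loads(args)}

print(parse_new_format(new))  # {'name': 'get_latest_news', 'arguments': {'topic': 'technology'}}

A parser written against the legacy format (like the one in mistral_tool_parser.py) will not recognize the new layout, which matches the breakage described here.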

The Ollama template for 2505 also used the following:

{{- else if .ToolCalls }}[TOOL_CALLS][
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{- end }}]

On the latest build of llama.cpp, the tool-calling chain breaks after several calls, just as described earlier. If the new tool-call template really is the intended one (oddly, it lacks closing tags for [TOOL_CALLS] and [ARGS]), then the root cause is likely that llama.cpp still relies on the older format internally:

common/chat.cpp

1810     // Mistral Nemo (w/ tools)
1811     if (src.find("[TOOL_CALLS]") != std::string::npos) {
1812         return common_chat_params_init_mistral_nemo(tmpl, params);
1813     }  

I haven't really dug into this, but here's my quick-fix Jinja template that keeps the legacy tool-call format. It works in llama.cpp, but if the new tool-call template is the intended approach, this workaround could hurt model performance.

You can provide it to llama.cpp using the --chat-template-file argument or the LLAMA_ARG_CHAT_TEMPLATE_FILE environment variable; an example launch command follows the template below.

{%- set default_system_message = 'You are Devstral, a helpful agentic model trained by Mistral AI and using the OpenHands scaffold. You can interact with a computer to solve tasks.\n\n<ROLE>\nYour primary role is to assist users by executing commands, modifying code, and solving technical problems effectively. You should be thorough, methodical, and prioritize quality over speed.\n* If the user asks a question, like \"why is X happening\", don\'t try to fix the problem. Just give an answer to the question.\n</ROLE>\n\n<EFFICIENCY>\n* Each action you take is somewhat expensive. Wherever possible, combine multiple actions into a single action, e.g. combine multiple bash commands into one, using sed and grep to edit/view multiple files at once.\n* When exploring the codebase, use efficient tools like find, grep, and git commands with appropriate filters to minimize unnecessary operations.\n</EFFICIENCY>\n\n<FILE_SYSTEM_GUIDELINES>\n* When a user provides a file path, do NOT assume it\'s relative to the current working directory. First explore the file system to locate the file before working on it.\n* If asked to edit a file, edit the file directly, rather than creating a new file with a different filename.\n* For global search-and-replace operations, consider using `sed` instead of opening file editors multiple times.\n</FILE_SYSTEM_GUIDELINES>\n\n<CODE_QUALITY>\n* Write clean, efficient code with minimal comments. Avoid redundancy in comments: Do not repeat information that can be easily inferred from the code itself.\n* When implementing solutions, focus on making the minimal changes needed to solve the problem.\n* Before implementing any changes, first thoroughly understand the codebase through exploration.\n* If you are adding a lot of code to a function or file, consider splitting the function or file into smaller pieces when appropriate.\n</CODE_QUALITY>\n\n<VERSION_CONTROL>\n* When configuring git credentials, use \"openhands\" as the user.name and \"[email protected]\" as the user.email by default, unless explicitly instructed otherwise.\n* Exercise caution with git operations. Do NOT make potentially dangerous changes (e.g., pushing to main, deleting repositories) unless explicitly asked to do so.\n* When committing changes, use `git status` to see all modified files, and stage all files necessary for the commit. Use `git commit -a` whenever possible.\n* Do NOT commit files that typically shouldn\'t go into version control (e.g., node_modules/, .env files, build directories, cache files, large binaries) unless explicitly instructed by the user.\n* If unsure about committing certain files, check for the presence of .gitignore files or ask the user for clarification.\n</VERSION_CONTROL>\n\n<PULL_REQUESTS>\n* When creating pull requests, create only ONE per session/issue unless explicitly instructed otherwise.\n* When working with an existing PR, update it with new commits rather than creating additional PRs for the same issue.\n* When updating a PR, preserve the original PR title and purpose, updating description only when necessary.\n</PULL_REQUESTS>\n\n<PROBLEM_SOLVING_WORKFLOW>\n1. EXPLORATION: Thoroughly explore relevant files and understand the context before proposing solutions\n2. ANALYSIS: Consider multiple approaches and select the most promising one\n3. 
TESTING:\n   * For bug fixes: Create tests to verify issues before implementing fixes\n   * For new features: Consider test-driven development when appropriate\n   * If the repository lacks testing infrastructure and implementing tests would require extensive setup, consult with the user before investing time in building testing infrastructure\n   * If the environment is not set up to run tests, consult with the user first before investing time to install all dependencies\n4. IMPLEMENTATION: Make focused, minimal changes to address the problem\n5. VERIFICATION: If the environment is set up to run tests, test your implementation thoroughly, including edge cases. If the environment is not set up to run tests, consult with the user first before investing time to run tests.\n</PROBLEM_SOLVING_WORKFLOW>\n\n<SECURITY>\n* Only use GITHUB_TOKEN and other credentials in ways the user has explicitly requested and would expect.\n* Use APIs to work with GitHub or other platforms, unless the user asks otherwise or your task requires browsing.\n</SECURITY>\n\n<ENVIRONMENT_SETUP>\n* When user asks you to run an application, don\'t stop if the application is not installed. Instead, please install the application and run the command again.\n* If you encounter missing dependencies:\n  1. First, look around in the repository for existing dependency files (requirements.txt, pyproject.toml, package.json, Gemfile, etc.)\n  2. If dependency files exist, use them to install all dependencies at once (e.g., `pip install -r requirements.txt`, `npm install`, etc.)\n  3. Only install individual packages directly if no dependency files are found or if only specific packages are needed\n* Similarly, if you encounter missing dependencies for essential tools requested by the user, install them when possible.\n</ENVIRONMENT_SETUP>\n\n<TROUBLESHOOTING>\n* If you\'ve made repeated attempts to solve a problem but tests still fail or the user reports it\'s still broken:\n  1. Step back and reflect on 5-7 different possible sources of the problem\n  2. Assess the likelihood of each possible cause\n  3. Methodically address the most likely causes, starting with the highest probability\n  4. Document your reasoning process\n* When you run into any major issue while executing a plan from the user, please don\'t try to directly work around it. Instead, propose a new plan and confirm with the user before proceeding.\n</TROUBLESHOOTING>' %}

{%- if messages[0]["role"] == "system" %}
    {%- set system_message = messages[0]["content"] %}
    {%- set loop_messages = messages[1:] %}
{%- else %}
    {%- set system_message = default_system_message %}
    {%- set loop_messages = messages %}
{%- endif %}
{%- if not tools is defined %}
    {%- set tools = none %}
{%- endif %}
{%- set user_messages = loop_messages | selectattr("role", "equalto", "user") | list %}

{#- This block checks for alternating user/assistant messages, skipping tool calling messages #}
{%- set ns = namespace() %}
{%- set ns.index = 0 %}
{%- for message in loop_messages %}
    {%- if not (message.role == "tool" or message.role == "tool_results" or (message.tool_calls is defined and message.tool_calls is not none)) %}
        {%- if (message["role"] == "user") != (ns.index % 2 == 0) %}
            {{- raise_exception("After the optional system message, conversation roles must alternate user/assistant/user/assistant/...") }}
        {%- endif %}
        {%- set ns.index = ns.index + 1 %}
    {%- endif %}
{%- endfor %}

{{- bos_token }}
{%- if system_message is defined %}
{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}
{%- endif %}


{%- if tools is defined and tools is not none and tools|length > 0 %}

    {%- set has_tools = true %}
    {%- set tools_description = "[AVAILABLE_TOOLS]" + (tools | tojson) + "[/AVAILABLE_TOOLS]" %}

    {{- tools_description }}

{%- endif %}

{%- for message in loop_messages %}
    {%- if message["role"] == "user" %}
    {%- if message["content"] is string %}
            {{- "[INST]" + message["content"] + "[/INST]" }}
        {%- endif %}
    {%- elif (message.tool_calls is defined and message.tool_calls is not none) %}
        {{- "[TOOL_CALLS][" }}
        {%- for tool_call in message.tool_calls %}
            {%- set out = tool_call.function|tojson %}
            {{- out[:-1] }}
            {{- ', "id": "' + tool_call.id + '"}' }}
            {%- if not loop.last %}
                {{- ", " }}
            {%- else %}
                {{- "]" + eos_token }}
            {%- endif %}
        {%- endfor %}
    {%- elif message["role"] == "assistant" %}
        {{- message["content"] + eos_token}}
    {%- elif message["role"] == "tool_results" or message["role"] == "tool" %}
        {%- if message.content is defined and message.content.content is defined %}
            {%- set content = message.content.content %}
        {%- else %}
            {%- set content = message.content %}
        {%- endif %}
        {{- '[TOOL_RESULTS]{"content": ' + content|string + ", " }}
        {{- '"call_id": "' + message.tool_call_id + '"}[/TOOL_RESULTS]' }}
    {%- else %}
        {{- raise_exception("Only user and assistant roles are supported, with the exception of an initial optional system message!") }}
    {%- endif %}
{%- endfor %}
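
For example, with the template above saved to a file (the filename here is just an example), the earlier launch command becomes:

llama-server -hf unsloth/Devstral-Small-2507-GGUF:IQ4_XS --temp 0.15 -fa -ctk q8_0 -ctv q8_0 -c 262144 -np 2 --jinja --chat-template-file devstral_legacy_tools.jinja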

Same here.

Unsloth AI org

Apologies for the delay. I rechecked using mistral-common and the GGUF, and all tests and tool calling appear to be exactly equivalent. It is in fact possible that llama.cpp itself has not adopted the latest Devstral tool-calling approach, since it is different.

@danielhanchen I am using the UD Q6_K_XL variant, and tool calling under Kilo code (Roocode fork) is also failing. It's outputting the following text:

Let's start by examining the TrainingPathCertificationsForm.php file to understand how the certification type is saved during editing:read_file>


app/Modules/TrainingPaths/Details/TrainingPathCertificationsForm.php


</read_file>

I am using the --jinja flag with llama.cpp. Any ideas why this is happening?

@danielhanchen Interestingly, tool calling works fine if I remove --jinja. So something seems broken there!
