Build Agents with smolagents on Azure ML

This example showcases how to build agents with smolagents, leveraging Large Language Models (LLMs) from the Hugging Face Collection in Azure ML, deployed as Managed Online Endpoints and powered by Hugging Face’s Text Generation Inference (TGI).

This example is not intended to be an in-depth guide on how to deploy Large Language Models (LLMs) on Azure ML, but rather on how to build agents on top of such a deployment. That being said, it’s highly recommended to read more about Azure ML deployments in the example “Deploy Large Language Models (LLMs) on Azure ML”.

TL;DR smolagents is an open-source Python library designed to make it extremely easy to build and run agents using just a few lines of code. Text Generation Inference (TGI) is a solution developed by Hugging Face for deploying and serving LLMs and VLMs with high-performance text generation. Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle.


This example will specifically deploy Qwen/Qwen2.5-Coder-32B-Instruct from the Hugging Face Hub (or see the Qwen2.5-Coder-32B-Instruct page on Azure ML) as an Azure ML Managed Online Endpoint. Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen), bringing significant improvements over CodeQwen1.5.

Qwen2.5 Coder 32B Instruct on the Hugging Face Hub

Qwen2.5 Coder 32B Instruct on Azure ML

For more information, make sure to check the model card on the Hugging Face Hub.

Note that you can select any LLM available on the Hugging Face Hub with the “Deploy to AzureML” option enabled, or directly select any of the LLMs available in the Azure ML Model Catalog under the “HuggingFace” collection.

Pre-requisites

To run the following example, you will need to comply with the following pre-requisites. Alternatively, you can also read more about them in Azure Machine Learning Tutorial: Create resources you need to get started.

Azure Account

A Microsoft Azure account with an active subscription. If you don’t have a Microsoft Azure account, you can now create one for free, including 200 USD worth of credits to use within the next 30 days after the account creation.

Azure CLI

The Azure CLI (az) installed on the instance that you’re running this example on; see the installation steps and follow the preferred method for your instance. Then log in to your subscription as follows:

az login

More information at Sign in with Azure CLI - Login and Authentication.

Azure Resource Group

An Azure Resource Group under which you will create the Azure ML workspace and the rest of the required resources. If you don’t have one, you can create it as follows:

az group create --name huggingface-azure-rg --location eastus

Then, you can ensure that the resource group was created successfully by e.g. listing all the available resource groups that you have access to on your subscription:

az group list --output table

More information at Manage Azure resource groups by using Azure CLI.

You can also create the Azure Resource Group via the Azure Portal, via the Azure ML Studio when creating the Azure ML Workspace as described below, or via the Azure Resource Management Python SDK (requires it to be installed as pip install azure-mgmt-resource in advance).

Azure ML Workspace

An Azure ML workspace under the subscription and resource group aforementioned. If you don’t have one, you can create it as:

az ml workspace create \
    --name huggingface-azure-ws \
    --resource-group huggingface-azure-rg \
    --location eastus

Then, you can ensure that the workspace was created successfully by e.g. listing all the available workspaces that you have access to on your subscription:

az ml workspace list --resource-group huggingface-azure-rg --output table

More information at Tutorial: Create resources you need to get started - Create the workspace and find more information about Azure ML Workspace at What is an Azure Machine Learning workspace?.

You can also create the Azure ML Workspace via the Azure ML Studio, via the Azure Portal, or via the Azure ML Python SDK.

Setup and installation

In this example, the Azure Machine Learning SDK for Python will be used to create the endpoint and the deployment, as well as to invoke the deployed API. Along with it, you will also need to install azure-identity to authenticate with your Azure credentials via Python.

%pip install azure-ai-ml azure-identity --upgrade --quiet

More information at Azure Machine Learning SDK for Python.

Then, for convenience, it’s recommended to set the following environment variables, as they will be used throughout the example by the Azure ML client; make sure to update them according to your Microsoft Azure account and resources.

%env LOCATION eastus
%env SUBSCRIPTION_ID <YOUR_SUBSCRIPTION_ID>
%env RESOURCE_GROUP <YOUR_RESOURCE_GROUP>
%env AML_WORKSPACE_NAME <YOUR_AML_WORKSPACE_NAME>

Finally, you also need to define both the Azure ML Endpoint and Deployment names, as those will be used throughout the example too. Note that endpoint names need to be unique per region, so add a timestamp or a region-specific identifier if needed, and that they must be between 3 and 32 characters long:

%env AML_ENDPOINT_NAME qwen-coder-endpoint
%env AML_DEPLOYMENT_NAME qwen-coder-deployment
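As a sketch of how such a unique name could be generated, here is a hypothetical helper (not part of the Azure ML SDK), assuming the usual constraint that names start with a letter and contain only alphanumerics and hyphens:

```python
import re
import time


def make_endpoint_name(base: str) -> str:
    # Append a Unix timestamp so the name stays unique per region,
    # then trim to the 32-character limit.
    name = f"{base}-{int(time.time())}"[:32].rstrip("-")
    # Sanity checks: 3-32 characters, starting with a letter,
    # alphanumerics and hyphens only (assumed naming rules).
    assert 3 <= len(name) <= 32, "endpoint names must be 3-32 characters long"
    assert re.fullmatch(r"[a-zA-Z][a-zA-Z0-9-]*", name), "invalid endpoint name"
    return name


print(make_endpoint_name("qwen-coder-endpoint"))  # e.g. qwen-coder-endpoint-1735689600
```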

Authenticate to Azure ML

Initially, you need to authenticate to create a new Azure ML client with your credentials, which will be later used to deploy the Hugging Face model, Qwen/Qwen2.5-Coder-32B-Instruct in this case, into an Azure ML Endpoint.

import os
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.getenv("SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("RESOURCE_GROUP"),
    workspace_name=os.getenv("AML_WORKSPACE_NAME"),
)

Create and Deploy Managed Online Endpoint

Before creating the Azure ML Endpoint, you need to build the Azure ML Model URI, which is formatted as azureml://registries/&lt;REGISTRY_NAME&gt;/models/&lt;MODEL_ID&gt;/labels/latest. This means that REGISTRY_NAME should be set to “HuggingFace”, as you intend to deploy a model from the Hugging Face Collection on the Azure ML Model Catalog, and that MODEL_ID won’t be the Hugging Face Hub ID as-is, but rather the ID lowercased and with both slashes (/) and underscores (_) replaced by hyphens, as follows:

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"

model_uri = (
    f"azureml://registries/HuggingFace/models/{model_id.replace('/', '-').replace('_', '-').lower()}/labels/latest"
)
model_uri
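The same conversion can be wrapped in a small helper (hypothetical, for illustration only) to make the transformation explicit:

```python
def hf_id_to_azureml_uri(model_id: str) -> str:
    # Slashes and underscores become hyphens, and the whole ID is lowercased.
    slug = model_id.replace("/", "-").replace("_", "-").lower()
    return f"azureml://registries/HuggingFace/models/{slug}/labels/latest"


print(hf_id_to_azureml_uri("Qwen/Qwen2.5-Coder-32B-Instruct"))
# azureml://registries/HuggingFace/models/qwen-qwen2.5-coder-32b-instruct/labels/latest
```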

Note that you will need to verify in advance that the URI is valid and that the given Hugging Face Hub Model ID exists on Azure, since Hugging Face publishes those models to its collection, meaning that some models may be available on the Hugging Face Hub but not yet on the Azure ML Model Catalog (you can request adding a model by following the guide Request a model addition).

Alternatively, you can use the following snippet to verify if a model is available on the Azure ML Model Catalog programmatically:

import requests

response = requests.get(f"https://generate-azureml-urls.azurewebsites.net/api/generate?modelId={model_id}")
if response.status_code != 200:
    print(
        f"[{response.status_code=}] {model_id=} not available on the Hugging Face Collection in Azure ML Model Catalog"
    )

Then, once the model URI has been built correctly and you have verified that the model exists on Azure ML, you can create the Managed Online Endpoint, specifying its name, via the ManagedOnlineEndpoint Python class (note that the name must be unique per region, so it’s good practice to include some unique identifier, especially if multi-region deployments are intended).

Also note that by default ManagedOnlineEndpoint will use the key authentication method, meaning that there will be a primary and a secondary key that should be sent within the Authorization header as a Bearer token; alternatively, the aml_token authentication method can be used. Read more about it at Authenticate clients for online endpoints.
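For illustration, the headers of a raw HTTP request against the endpoint would then look as follows (a sketch with a placeholder key; in practice the key is retrieved via the Azure ML client):

```python
api_key = "<PRIMARY_OR_SECONDARY_KEY>"  # placeholder; fetched via MLClient in practice

headers = {
    # The endpoint key (or AML token) is sent as a Bearer token:
    "Authorization": f"Bearer {api_key}",
    # Optionally pin the request to a specific deployment behind the endpoint:
    "azureml-model-deployment": "qwen-coder-deployment",
    "Content-Type": "application/json",
}
print(headers["Authorization"])  # Bearer <PRIMARY_OR_SECONDARY_KEY>
```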

The deployment, created via the ManagedOnlineDeployment Python class, will define the actual model deployment that will be exposed via the previously created endpoint. The ManagedOnlineDeployment expects: the model, i.e., the previously built URI azureml://registries/HuggingFace/models/qwen-qwen2.5-coder-32b-instruct/labels/latest; the endpoint_name; and the instance requirements, namely instance_type and instance_count.

Every model in the Hugging Face Collection is powered by an efficient inference backend, and each can run on a wide variety of instance types (as listed in Supported Hardware); in this case, an NVIDIA H100 GPU will be used, i.e., Standard_NC40ads_H100_v5.

Since some models and inference engines need to run on GPU-accelerated instances, you may need to request a quota increase for some of the supported instances, depending on the model you want to deploy. Also, keep in mind that each model comes with a list of all the supported instances, the recommended one for each tier being the smallest instance in terms of available VRAM. Read more about quota increase requests for Azure ML at Manage and increase quotas and limits for resources with Azure Machine Learning.

from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

endpoint = ManagedOnlineEndpoint(name=os.getenv("AML_ENDPOINT_NAME"))

deployment = ManagedOnlineDeployment(
    name=os.getenv("AML_DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("AML_ENDPOINT_NAME"),
    model=model_uri,
    instance_type="Standard_NC40ads_H100_v5",
    instance_count=1,
)
client.begin_create_or_update(endpoint).wait()

Azure ML Endpoint from Azure ML Studio

client.online_deployments.begin_create_or_update(deployment).wait()

Azure ML Deployment from Azure ML Studio

Note that whilst the Azure ML Endpoint creation is relatively fast, the deployment will take longer, since it needs to allocate the resources on Azure; expect it to take ~10-15 minutes, though it could as well take longer depending on instance provisioning and availability.

Once deployed, via the Azure ML Studio you’ll be able to inspect the logs at https://ml.azure.com/endpoints/realtime/qwen-coder-endpoint/logs, see how to consume the deployed API at https://ml.azure.com/endpoints/realtime/qwen-coder-endpoint/consume, or check their (on preview) model monitoring feature at https://ml.azure.com/endpoints/realtime/qwen-coder-endpoint/Monitoring.

If you named your Azure ML Endpoint differently (set via the AML_ENDPOINT_NAME environment variable), you’ll need to update the URLs above as https://ml.azure.com/endpoints/realtime/&lt;AML_ENDPOINT_NAME&gt; for those to work as expected.
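A small sketch of rebuilding those Studio URLs from the environment variable (a hypothetical helper, for convenience only):

```python
import os

# Fall back to the endpoint name used throughout this example if the variable is unset.
endpoint_name = os.getenv("AML_ENDPOINT_NAME", "qwen-coder-endpoint")
base_url = f"https://ml.azure.com/endpoints/realtime/{endpoint_name}"
# The Studio pages mentioned above: logs, consume, and the (preview) monitoring tab.
studio_urls = {page: f"{base_url}/{page}" for page in ("logs", "consume", "Monitoring")}
print(studio_urls["logs"])
```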

More information at Azure ML Managed Online Endpoints and Deploy and score a machine learning model by using an online endpoint (which can be done via the az CLI, the Azure ML SDK for Python as above, the Azure ML Studio, the Hugging Face Hub from the given model card, or an ARM Template).

Build agents with smolagents

Now that the Azure ML Endpoint is running, you can start sending requests to it. There are multiple approaches, but the following only covers the OpenAI Python SDK approach; visit e.g. Deploy Large Language Models (LLMs) on Azure ML to see different alternatives.

The steps to follow to build the agent are:

  1. Create the OpenAI client with smolagents, connected to the running Azure ML Endpoint, via smolagents.OpenAIServerModel (note that smolagents also exposes smolagents.AzureOpenAIServerModel, but that’s the client for Azure OpenAI, not for connecting to Azure ML).
  2. Define the set of tools that the agent will have access to i.e., Python functions with the smolagents.tool decorator.
  3. Create the smolagents.CodeAgent leveraging the code LLM deployed on Azure ML, adding the set of tools previously defined so that the agent can use those when appropriate, using a local executor (not recommended if the code to be executed is sensitive or untrusted).

Create OpenAI Client

Since Text Generation Inference (TGI) also exposes OpenAI-compatible routes, you can also leverage the OpenAI Python SDK via smolagents to send requests to the deployed Azure ML Endpoint.

To use the OpenAI Python SDK with Azure ML, you first need to retrieve both the api_url, with the /v1 route (which contains the v1/chat/completions endpoint that the OpenAI Python SDK will send requests to), and the api_key, which is the primary key generated in Azure ML (unless a dedicated Azure ML Token is used instead). You can do that via the previously instantiated azure.ai.ml.MLClient as follows:

api_key = client.online_endpoints.get_keys(os.getenv("AML_ENDPOINT_NAME")).primary_key
api_url = client.online_endpoints.get(os.getenv("AML_ENDPOINT_NAME")).scoring_uri.replace("/generate", "/v1")

Alternatively, you can also build the API URL manually as follows:

api_url = f"https://{os.getenv('AML_ENDPOINT_NAME')}.{os.getenv('LOCATION')}.inference.ml.azure.com/v1"
api_url

Or just retrieve it from the Azure ML Studio manually too.

Then you can use the OpenAI Python SDK normally, or in this case the smolagents.OpenAIServerModel class, which is an interface for any OpenAI-compatible API. Additionally, for Azure ML you need to make sure to include the extra header azureml-model-deployment, containing the Azure ML Deployment name, provided via client_kwargs as default_headers so that smolagents.OpenAIServerModel propagates it to the underlying OpenAI client when instantiating it.

%pip install "smolagents[openai]" --upgrade --quiet

from smolagents import OpenAIServerModel

model = OpenAIServerModel(
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
    api_base=api_url,
    api_key=api_key,
    client_kwargs={"default_headers": {"azureml-model-deployment": os.getenv("AML_DEPLOYMENT_NAME")}},
)

Build Python Tools

smolagents will be used to build the tools that the agent will leverage, as well as to build the smolagents.CodeAgent itself. The following tools will be defined using the smolagents.tool decorator, which prepares the Python functions to be used as tools by the LLM agent.

Note that the function signatures should come with proper typing so as to guide the LLM, as well as a clear function name and, most importantly, well-formatted docstrings indicating what the function does, what the arguments are, what it returns, and, if applicable, what errors can be raised.
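As an illustration of why that matters: the signature and docstring are exactly the metadata Python already exposes, and agent frameworks typically derive tool schemas from them. A standard-library sketch, using a hypothetical stub function (not one of the tools in this example):

```python
import inspect


def get_current_weather(city: str) -> str:
    """Returns a short weather description for a given city.

    Args:
        city: The city name to look up.
    """
    ...


# Tool schemas are typically derived from exactly this metadata:
signature = inspect.signature(get_current_weather)
print(list(signature.parameters))  # ['city']
print(signature.parameters["city"].annotation is str)  # True
print(inspect.getdoc(get_current_weather).splitlines()[0])
# Returns a short weather description for a given city.
```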

In this case, for the sake of simplicity, the tools provided to the agent have been ported from https://github.com/huggingface/smolagents/blob/main/examples/multiple_tools.py, so all the credit goes to the original authors and maintainers of the smolagents GitHub repository. Only the tools for querying the World Time API and the Wikipedia API have been kept, since those have a generous free tier that allows anyone to use them without paying or having to create an account / API token.

from smolagents import tool

World Time API - get_time_in_timezone

@tool
def get_time_in_timezone(location: str) -> str:
    """
    Fetches the current time for a given location using the World Time API.
    Args:
        location: The location for which to fetch the current time, formatted as 'Region/City'.
    Returns:
        str: A string indicating the current time in the specified location, or an error message if the request fails.
    Raises:
        requests.exceptions.RequestException: If there is an issue with the HTTP request.
    """
    import requests

    url = f"http://worldtimeapi.org/api/timezone/{location}.json"

    try:
        response = requests.get(url)
        response.raise_for_status()

        data = response.json()
        current_time = data["datetime"]

        return f"The current time in {location} is {current_time}."

    except requests.exceptions.RequestException as e:
        return f"Error fetching time data: {str(e)}"

Wikipedia API - search_wikipedia

@tool
def search_wikipedia(query: str) -> str:
    """
    Fetches a summary of a Wikipedia page for a given query.
    Args:
        query: The search term to look up on Wikipedia.
    Returns:
        str: A summary of the Wikipedia page if successful, or an error message if the request fails.
    Raises:
        requests.exceptions.RequestException: If there is an issue with the HTTP request.
    """
    import requests

    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{query}"

    try:
        response = requests.get(url)
        response.raise_for_status()

        data = response.json()
        title = data["title"]
        extract = data["extract"]

        return f"Summary for {title}: {extract}"

    except requests.exceptions.RequestException as e:
        return f"Error fetching Wikipedia data: {str(e)}"

Create Agent

Since the LLM deployed on Azure ML in this case is a code-specific LLM, the agent will be created with smolagents.CodeAgent, which adds the relevant prompting and parsing functionality so as to interpret the LLM outputs as code. Alternatively, one could also use smolagents.ToolCallingAgent, which requires the given LLM to have tool-calling capabilities.

Then, smolagents.CodeAgent expects both the model and the set of tools the model has access to; via the run method, you can then leverage the full potential of the agent automatically, without manual intervention, so that the agent will use the given tools if needed to answer or comply with your initial request.

from smolagents import CodeAgent

agent = CodeAgent(
    tools=[
        get_time_in_timezone,
        search_wikipedia,
    ],
    model=model,
    stream_outputs=True,
)
agent.run(
    "Could you create a Python function that given the summary of 'What is a Lemur?'"
    " replaces all the ocurrences of the letter E with the letter U (ignore the casing)"
)
# Summary for Lumur: Lumurs aru wut-nosud primatus of thu supurfamily Lumuroidua, dividud into 8 familius and consisting of 15 gunura and around 100 uxisting spucius. Thuy aru undumic to thu island of Madagascar. Most uxisting lumurs aru small, with a pointud snout, largu uyus, and a long tail. Thuy chiufly livu in truus and aru activu at night.
agent.run("What time is in Thailand right now? And what's the time difference with France?")
# 5 hours
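As a quick sanity check of that answer using only the standard library (note the difference is 5 hours during European summer time and 6 hours in winter, since Thailand does not observe DST):

```python
from datetime import datetime
from zoneinfo import ZoneInfo


def offset_difference_hours(tz_a: str, tz_b: str, when: datetime) -> float:
    # Difference in UTC offsets between two IANA timezones at a given instant.
    offset_a = when.replace(tzinfo=ZoneInfo(tz_a)).utcoffset()
    offset_b = when.replace(tzinfo=ZoneInfo(tz_b)).utcoffset()
    return (offset_a - offset_b).total_seconds() / 3600


print(offset_difference_hours("Asia/Bangkok", "Europe/Paris", datetime(2025, 7, 1)))  # 5.0
print(offset_difference_hours("Asia/Bangkok", "Europe/Paris", datetime(2025, 1, 1)))  # 6.0
```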

Release resources

Once you are done using the Azure ML Endpoint / Deployment, you can delete the resources as follows, meaning that you will stop paying for the instance on which the model is running and all the attached costs will stop.

client.online_endpoints.begin_delete(name=os.getenv("AML_ENDPOINT_NAME")).result()

Conclusion

Throughout this example you learnt how to deploy an Azure ML Managed Online Endpoint running a model from the Hugging Face Collection in the Azure ML Model Catalog, and leverage it to build agents with smolagents.

If you have any doubt, issue or question about this example, feel free to open an issue and we’ll do our best to help!


📍 Find the complete example on GitHub here!
