Deploy SmolLM3 on Azure AI

This example showcases how to deploy SmolLM3 from the Hugging Face Collection in an Azure AI Foundry Hub as an Azure ML Managed Online Endpoint, powered by Transformers with an OpenAI-compatible interface. Additionally, it shows how to run inference with the Azure ML Python SDK and the OpenAI Python SDK, and even how to run a Gradio chat-completion application locally.

SmolLM3 3B logo image

TL;DR Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal models, for both inference and training. Azure AI Foundry provides a unified platform for enterprise AI operations, model builders, and application development. Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle.


This example will specifically deploy HuggingFaceTB/SmolLM3-3B from the Hugging Face Hub (or see it on AzureML or on Azure AI Foundry) as an Azure ML Managed Online Endpoint in an Azure AI Foundry Hub.

SmolLM3 is a 3B parameter language model designed to push the boundaries of small models. It supports dual-mode reasoning, 6 languages, and long context. SmolLM3 is a fully open model that offers strong performance at the 3B–4B scale.

SmolLM3 3B size and performance comparison

The model is a decoder-only transformer using GQA and NoPE (with a 3:1 ratio). It was pretrained on 11.2T tokens with a staged curriculum of web, code, math, and reasoning data. Post-training included midtraining on 140B reasoning tokens, followed by supervised fine-tuning and alignment via Anchored Preference Optimization (APO).

Key features

- Dual-mode reasoning: extended thinking can be enabled or disabled via the /think and /no_think system prompt flags
- Multilingual: 6 natively supported languages (English, French, Spanish, German, Italian, and Portuguese)
- Long-context support
- Fully open model with strong performance at the 3B–4B scale

SmolLM3 3B on the Hugging Face Hub

SmolLM3 3B on Azure AI Foundry

For more information, make sure to check its model card on the Hugging Face Hub.

Pre-requisites

To run the following example, you will need to meet a few pre-requisites on your Azure account; you can read more about them in the Azure Machine Learning Tutorial: Create resources you need to get started.

For more information, please go through the steps in Configure Microsoft Azure for Azure AI.

Setup and installation

In this example, the Azure Machine Learning SDK for Python will be used to create the endpoint and the deployment, as well as to invoke the deployed API. Along with it, you will also need to install azure-identity to authenticate with your Azure credentials via Python.

%pip install azure-ai-ml azure-identity --upgrade --quiet

More information at Azure Machine Learning SDK for Python.

Then, for convenience, setting the following environment variables is recommended, as they will be used throughout the example by the Azure ML client; make sure to update those values according to your Microsoft Azure account and resources.

%env LOCATION eastus
%env SUBSCRIPTION_ID <YOUR_SUBSCRIPTION_ID>
%env RESOURCE_GROUP <YOUR_RESOURCE_GROUP>
%env AI_FOUNDRY_HUB_PROJECT <YOUR_AI_FOUNDRY_HUB_PROJECT>

Finally, you also need to define both the endpoint and deployment names, as those will be used throughout the example too:

Note that endpoint names must be globally unique per region, i.e., even if you don’t have any endpoint with that name running under your subscription, if the name is reserved by another Azure customer, then you won’t be able to use it. Adding a timestamp or a custom identifier is recommended to prevent running into HTTP 400 validation issues when trying to deploy an endpoint with an already locked / reserved name. Also, the endpoint name must be between 3 and 32 characters long.

import os
from uuid import uuid4

os.environ["ENDPOINT_NAME"] = f"smollm3-endpoint-{str(uuid4())[:8]}"
os.environ["DEPLOYMENT_NAME"] = f"smollm3-deployment-{str(uuid4())[:8]}"

Authenticate to Azure ML

Initially, you need to authenticate into the Azure AI Foundry Hub via Azure ML with the Azure ML Python SDK, which will later be used to deploy HuggingFaceTB/SmolLM3-3B as an Azure ML Managed Online Endpoint in your Azure AI Foundry Hub.

For standard Azure ML deployments you’d create the MLClient using the Azure ML Workspace as the workspace_name, whereas for Azure AI Foundry you need to provide the Azure AI Foundry Hub name as the workspace_name instead, and that will deploy the endpoint under the Azure AI Foundry Hub too.

import os
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.getenv("SUBSCRIPTION_ID"),
    resource_group_name=os.getenv("RESOURCE_GROUP"),
    workspace_name=os.getenv("AI_FOUNDRY_HUB_PROJECT"),
)

Create and Deploy Azure AI Endpoint

Before creating the Managed Online Endpoint, you need to build the model URI, which is formatted as follows: azureml://registries/<REGISTRY_NAME>/models/<MODEL_ID>/labels/latest (even though the URI contains azureml, it’s the same for Azure AI Foundry, since the model catalog is shared). This means that REGISTRY_NAME should be set to “HuggingFace”, as you intend to deploy a model from the Hugging Face Collection, and MODEL_ID won’t be the Hugging Face Hub ID verbatim, but rather the ID with slashes (/) and underscores (_) replaced with hyphens (-) and then lower-cased, as follows:

model_id = "HuggingFaceTB/SmolLM3-3B"

model_uri = (
    f"azureml://registries/HuggingFace/models/{model_id.replace('/', '-').replace('_', '-').lower()}/labels/latest"
)
model_uri

Note that you need to verify in advance that the URI is valid and that the given Hugging Face Hub Model ID exists on Azure, since Hugging Face publishes those models to its collection progressively, meaning that some models may be available on the Hugging Face Hub but not yet on the Azure Model Catalog (you can request adding a model by following the guide Request a model addition).

Alternatively, you can use the following snippet to verify if a model is available on the Azure Model Catalog programmatically:

import requests

response = requests.get(f"https://generate-azureml-urls.azurewebsites.net/api/generate?modelId={model_id}")
if response.status_code != 200:
    print("[{response.status_code=}] {model_id=} not available on the Hugging Face Collection in Azure Model Catalog")

As mentioned previously, the Managed Online Endpoint expects a unique name per region, so it’s good practice to add some sort of unique identifier to the name, especially in case of multi-region deployments. You can set the name via the ManagedOnlineEndpoint Python class.

Also note that, by default, the ManagedOnlineEndpoint will use the key authentication method, meaning that there will be a primary and a secondary key that should be sent within the Authorization header as a Bearer token; alternatively, the aml_token authentication method can be used. Read more about it at Authenticate clients for online endpoints.

The deployment, created via the ManagedOnlineDeployment Python class, will be exposed via the defined endpoint. The ManagedOnlineDeployment expects: the model (previously defined URI), the endpoint_name, and the instance requirements (instance_type and instance_count).

Every model in the Hugging Face Collection is powered by an efficient inference backend, and each of those can run on a wide variety of instance types (as listed in Supported Hardware); in this case, an NVIDIA H100 GPU will be used, i.e., Standard_NC40ads_H100_v5.

Since some models and inference engines require a GPU-accelerated instance, you may need to request a quota increase for some of the supported instances, depending on the model you want to deploy. Also, keep in mind that each model comes with a list of all the supported instances, with the recommended one for each tier being the lowest instance in terms of available VRAM. Read more about quota increase requests for Azure ML at Manage and increase quotas and limits for resources with Azure Machine Learning.

from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

endpoint = ManagedOnlineEndpoint(name=os.getenv("ENDPOINT_NAME"))

deployment = ManagedOnlineDeployment(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    model=model_uri,
    instance_type="Standard_NC40ads_H100_v5",
    instance_count=1,
)
client.begin_create_or_update(endpoint).wait()

Azure AI Endpoint from Azure AI Foundry

In Azure AI Foundry the endpoint will only be listed within the “My assets -> Models + endpoints” tab once the deployment is created, not before as in Azure ML where the endpoint is shown even if it doesn’t contain any active or in-progress deployments.

client.online_deployments.begin_create_or_update(deployment).wait()

Azure AI Deployment from Azure AI Foundry

Note that whilst the Azure AI Endpoint creation is relatively fast, the deployment will take longer since it needs to allocate the resources on Azure, so expect it to take ~10-15 minutes, although it could also take longer depending on instance provisioning and availability.

Once deployed, via either the Azure AI Foundry or the Azure ML Studio, you’ll be able to inspect the endpoint details, the real-time logs, how to consume the endpoint, and even use the monitoring feature (still in preview).
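You can also check the endpoint status and fetch the deployment's container logs programmatically with the same MLClient; a minimal sketch using the get and get_logs methods of the Azure ML Python SDK is shown below:

import os

# Check the provisioning state of the endpoint
endpoint_info = client.online_endpoints.get(os.getenv("ENDPOINT_NAME"))
print(endpoint_info.provisioning_state)

# Fetch the latest container logs for the deployment (useful for debugging)
logs = client.online_deployments.get_logs(
    name=os.getenv("DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("ENDPOINT_NAME"),
    lines=100,
)
print(logs)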

Find more information about it at Azure ML Managed Online Endpoints

Send requests to the Azure AI Endpoint

Finally, now that the Azure AI Endpoint is deployed, you can send requests to it. In this case, since the task of the model is text-generation (also known as chat-completion), you can leverage the OpenAI-compatible interface and send requests to /v1/chat/completions.

Note that below only some of the options are listed, but you can send requests to the deployed endpoint as long as you send them with the azureml-model-deployment header set to the name of the Azure AI Deployment (not the Endpoint) and have the necessary authentication token / key for the given endpoint; then you can send HTTP requests to all the routes that the backend engine exposes, not only to the scoring route.
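For instance, a minimal sketch of such a raw HTTP request with the requests library (assuming the endpoint's scoring URI points to the OpenAI-compatible /v1/chat/completions route, as is the case for this deployment) would look as follows:

import os
import requests

# Retrieve the primary key and the scoring URI of the deployed endpoint
api_key = client.online_endpoints.get_keys(os.getenv("ENDPOINT_NAME")).primary_key
scoring_uri = client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri

response = requests.post(
    scoring_uri,
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        # Route the request to the deployment, not just the endpoint
        "azureml-model-deployment": os.getenv("DEPLOYMENT_NAME"),
    },
    json={
        "model": "HuggingFaceTB/SmolLM3-3B",
        "messages": [{"role": "user", "content": "What is an Azure ML Managed Online Endpoint?"}],
        "max_tokens": 128,
    },
)
print(response.json())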

Support for Hugging Face models via azure-ai-inference Python SDK is still a work in progress, but that will be included soon and set as the recommended inference method, stay tuned!

OpenAI Python SDK

With this OpenAI-compatible Transformers interface, you can also leverage the OpenAI Python SDK to send requests to the deployed Azure AI Endpoint.

%pip install openai --upgrade --quiet

To use the OpenAI Python SDK with Azure ML Managed Online Endpoints, you first need to retrieve the API key and the base API URL:

api_key = client.online_endpoints.get_keys(os.getenv("ENDPOINT_NAME")).primary_key
api_url = client.online_endpoints.get(os.getenv("ENDPOINT_NAME")).scoring_uri.replace("/chat/completions", "")

Alternatively, you can also build the API URL manually as follows, since the URIs are globally unique per region, meaning that there will only be one endpoint with a given name within the same region:

api_url = f"https://{os.getenv('ENDPOINT_NAME')}.{os.getenv('LOCATION')}.inference.ml.azure.com/v1"

Or just retrieve it from either the Azure AI Foundry or the Azure ML Studio.

Then you can use the OpenAI Python SDK normally, making sure to include the extra azureml-model-deployment header that contains the Azure AI / ML Deployment name.

Via the OpenAI Python SDK, it can either be set within each call to chat.completions.create via the extra_headers parameter (as commented below), or via the default_headers parameter when instantiating the OpenAI client (which is the recommended approach, since the header needs to be present on each request, so setting it just once is preferred).

import os
from openai import OpenAI

openai_client = OpenAI(
    base_url=api_url,
    api_key=api_key,
    default_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},
)

Chat completion call

completion = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {"role": "system", "content": "You are an assistant that responds like a pirate."},
        {
            "role": "user",
            "content": "Give me a brief explanation of gravity in simple terms.",
        },
    ],
    max_tokens=50,
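    # Alternatively, the deployment header can be set per call instead of via default_headers:
    # extra_headers={"azureml-model-deployment": os.getenv("DEPLOYMENT_NAME")},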
)
print(completion)
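Since the response follows the OpenAI chat-completion schema, you can also print only the generated message (which, with extended thinking enabled, includes the reasoning trace):

print(completion.choices[0].message.content)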

Enabling and Disabling Extended Thinking Mode

By default, SmolLM3-3B enables extended thinking, so the example above generates the output with a reasoning trace. To choose between enabling and disabling it, you can provide the /think and /no_think flags through the system prompt, as shown in the snippet below with extended thinking disabled. The code for generating the response with extended thinking would be the same, except that the system prompt should have /think instead of /no_think.

completion = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {"role": "system", "content": "You are an assistant that responds like a pirate. /no_think"},
        {
            "role": "user",
            "content": "Give me a brief explanation of gravity in simple terms.",
        },
    ],
    max_tokens=50,
)
print(completion)

Multilingual capabilities

As mentioned, SmolLM3-3B has been trained to natively support 6 languages: English, French, Spanish, German, Italian, and Portuguese.

You can try it out and leverage its multilingual capabilities:

completion = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {"role": "system", "content": "You are an expert translation. /no_think"},
        {
            "role": "user",
            "content": "Translate the following English sentence into Spanish and German: The brown cat sat on the mat.",
        },
    ],
    max_tokens=50,
)
print(completion)

Agentic Usage: Tool Calling

SmolLM3-3B supports tool calling. Just pass your list of tools as dictionary objects as follows. You have to specify the name, the description and the parameters so the model can generate the correct tool call.

Remember to set the max_completion_tokens parameter to a relatively high value, since the model will need enough tokens to generate the answer.

response = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[{"role": "user", "content": "What is the weather like in New York?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The unit of temperature",
                        },
                    },
                    "required": ["location"],
                },
            },
        }
    ],
    tool_choice="auto",
    max_completion_tokens=300,
)
print(response)
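The response should contain a tool call rather than a final answer. As a minimal sketch (the local get_weather implementation below is hypothetical, and it assumes the model did return a tool call), you can parse the call, execute it, and send the result back as a tool message so the model can produce the final answer:

import json

# Extract the tool call generated by the model and parse its arguments
tool_call = response.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)
print(tool_call.function.name, arguments)

# Hypothetical local implementation of the tool
def get_weather(location: str, unit: str = "celsius") -> str:
    return f"It is 22 degrees {unit} and sunny in {location}."

# Send the tool result back so the model can answer the original question
followup = openai_client.chat.completions.create(
    model="HuggingFaceTB/SmolLM3-3B",
    messages=[
        {"role": "user", "content": "What is the weather like in New York?"},
        response.choices[0].message,
        {"role": "tool", "tool_call_id": tool_call.id, "content": get_weather(**arguments)},
    ],
    max_completion_tokens=300,
)
print(followup.choices[0].message.content)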

Release resources

Once you are done using the Azure AI Endpoint / Deployment, you can delete the resources as follows, meaning that you will stop paying for the instance on which the model is running and all the attached costs will stop.

client.online_endpoints.begin_delete(name=os.getenv("ENDPOINT_NAME")).result()

Conclusion

Throughout this example you learnt how to create and configure your Azure account for Azure ML and Azure AI Foundry, how to create a Managed Online Endpoint running an open model from the Hugging Face Collection in the Azure ML / Azure AI Foundry model catalog, how to send inference requests with the OpenAI SDK, and finally, how to stop and release the resources.

If you have any doubt, issue or question about this example, feel free to open an issue and we’ll do our best to help!


📍 Find the complete example on GitHub here!
