This example showcases how to deploy a Vision Language Model (VLM), i.e., a Large Language Model (LLM) with vision understanding, from the Hugging Face Collection on Azure ML as a Managed Online Endpoint, powered by Hugging Face’s Text Generation Inference (TGI). It also shows how to run inference with the Azure ML Python SDK and the OpenAI Python SDK, and how to run a Gradio application locally for chat completion with images.
Note that this example goes through the programmatic deployment with the Python SDK / Azure CLI; if you’d rather use the one-click deployment experience, please check One-click Deployment on Azure ML.
TL;DR Text Generation Inference (TGI) is a solution developed by Hugging Face for deploying and serving LLMs and VLMs with high performance text generation. Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project lifecycle.
This example will specifically deploy Qwen/Qwen2.5-VL-32B-Instruct from the Hugging Face Hub (or see the Qwen2.5-VL-32B-Instruct page on AzureML) as an Azure ML Managed Online Endpoint, as it’s one of the latest VLMs from Qwen, released after incorporating the feedback from the previous Qwen2 VL release, with some key enhancements such as:


For more information, make sure to check their model card on the Hugging Face Hub.
Note that you can select any LLM available on the Hugging Face Hub with the “Deploy to AzureML” option enabled, or directly select any of the LLMs available in the Azure ML Model Catalog under the “HuggingFace” collection.
To run the following example, you will need to meet the following prerequisites; alternatively, you can also read more about them in the Azure Machine Learning Tutorial: Create resources you need to get started.
A Microsoft Azure account with an active subscription. If you don’t have a Microsoft Azure account, you can create one for free, including 200 USD worth of credits to use within 30 days of account creation.
The Azure CLI (az) installed on the instance that you’re running this example on; see the installation steps and follow the preferred method for your instance. Then log in to your subscription as follows:
az login
More information at Sign in with Azure CLI - Login and Authentication.
An Azure Resource Group under which you will create the Azure ML workspace and the rest of the required resources. If you don’t have one, you can create it as follows:
az group create --name huggingface-azure-rg --location eastus
Then, you can ensure that the resource group was created successfully by e.g. listing all the available resource groups that you have access to on your subscription:
az group list --output table
More information at Manage Azure resource groups by using Azure CLI.
You can also create the Azure Resource Group via the Azure Portal, via the Azure ML Studio when creating the Azure ML Workspace as described below, or via the Azure Resource Management Python SDK (requires it to be installed as pip install azure-mgmt-resource in advance).
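For illustration, a minimal sketch of creating the resource group with the Azure Resource Management Python SDK is shown below; it assumes azure-mgmt-resource and azure-identity are installed, uses a placeholder subscription ID, and mirrors the names from the CLI example above.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Create the resource management client with your Azure credentials (subscription ID is a placeholder).
resource_client = ResourceManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.getenv("SUBSCRIPTION_ID", "<YOUR_SUBSCRIPTION_ID>"),
)
# Create (or update) the resource group, mirroring the `az group create` command above.
resource_client.resource_groups.create_or_update("huggingface-azure-rg", {"location": "eastus"})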
An Azure ML workspace under the aforementioned subscription and resource group. If you don’t have one, you can create it as follows:
az ml workspace create \
--name huggingface-azure-ws \
--resource-group huggingface-azure-rg \
--location eastus
Then, you can ensure that the workspace was created successfully by e.g. listing all the available workspaces that you have access to on your subscription:
az ml workspace list --resource-group huggingface-azure-rg --output table
More information at Tutorial: Create resources you need to get started - Create the workspace and find more information about Azure ML Workspace at What is an Azure Machine Learning workspace?.
You can also create the Azure ML Workspace via the Azure ML Studio, via the Azure Portal, or via the Azure ML Python SDK.
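As a sketch of the Azure ML Python SDK route, the snippet below creates the workspace with azure-ai-ml, mirroring the names from the CLI command above; the subscription ID is a placeholder.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Workspace
from azure.identity import DefaultAzureCredential

# Instantiate an MLClient scoped to the subscription and resource group (no workspace yet).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<YOUR_SUBSCRIPTION_ID>",
    resource_group_name="huggingface-azure-rg",
)
# Create the workspace and wait until the long-running operation completes.
workspace = Workspace(name="huggingface-azure-ws", location="eastus")
ml_client.workspaces.begin_create(workspace).result()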
In this example, the Azure Machine Learning SDK for Python will be used to create the endpoint and the deployment, as well as to invoke the deployed API. Along with it, you will also need to install azure-identity to authenticate with your Azure credentials via Python.
%pip install azure-ai-ml azure-identity --upgrade --quiet
More information at Azure Machine Learning SDK for Python.
Then, for convenience, setting the following environment variables is recommended, as they will be used throughout the example for the Azure ML Client; make sure to update and set those values according to your Microsoft Azure account and resources.
%env LOCATION eastus
%env SUBSCRIPTION_ID <YOUR_SUBSCRIPTION_ID>
%env RESOURCE_GROUP <YOUR_RESOURCE_GROUP>
%env AML_WORKSPACE_NAME <YOUR_AML_WORKSPACE_NAME>
Finally, you also need to define both the Azure ML Endpoint and Deployment names, as those will be used throughout the example too (note that endpoint names need to be unique per region, so add a timestamp or a region-specific identifier if needed, and keep them between 3 and 32 characters long):
%env AML_ENDPOINT_NAME qwen-vlm-endpoint
%env AML_DEPLOYMENT_NAME qwen-vlm-deployment
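Note that the %env magics above are IPython-specific; if you are running the example as a plain Python script instead of a notebook, a minimal sketch of setting the same values via os.environ is shown below (the values are placeholders to replace with your own).
import os

# Equivalent of the %env magics for plain Python scripts.
os.environ["LOCATION"] = "eastus"
os.environ["SUBSCRIPTION_ID"] = "<YOUR_SUBSCRIPTION_ID>"
os.environ["RESOURCE_GROUP"] = "<YOUR_RESOURCE_GROUP>"
os.environ["AML_WORKSPACE_NAME"] = "<YOUR_AML_WORKSPACE_NAME>"
os.environ["AML_ENDPOINT_NAME"] = "qwen-vlm-endpoint"
os.environ["AML_DEPLOYMENT_NAME"] = "qwen-vlm-deployment"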
Initially, you need to authenticate to create a new Azure ML client with your credentials, which will be later used to deploy the Hugging Face model, Qwen/Qwen2.5-VL-32B-Instruct in this case, into an Azure ML Endpoint.
import os
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
client = MLClient(
credential=DefaultAzureCredential(),
subscription_id=os.getenv("SUBSCRIPTION_ID"),
resource_group_name=os.getenv("RESOURCE_GROUP"),
workspace_name=os.getenv("AML_WORKSPACE_NAME"),
)
Before creating the Azure ML Endpoint, you need to build the Azure ML Model URI, which is formatted as follows: azureml://registries/<REGISTRY_NAME>/models/<MODEL_ID>/labels/latest. This means that the REGISTRY_NAME should be set to “HuggingFace”, as you intend to deploy a model from the Hugging Face Collection on the Azure ML Model Catalog, and that the MODEL_ID won’t be the Hugging Face Hub ID, but rather the same ID with both forward slashes (/) and underscores (_) replaced by hyphens, as follows:
model_id = "Qwen/Qwen2.5-VL-32B-Instruct"
model_uri = (
f"azureml://registries/HuggingFace/models/{model_id.replace('/', '-').replace('_', '-').lower()}/labels/latest"
)
model_uri
Note that you will need to verify in advance that the URI is valid and that the given Hugging Face Hub model exists on Azure, since Hugging Face publishes those models into their collection, meaning that some models may be available on the Hugging Face Hub but not yet on the Azure ML Model Catalog (you can request adding a model following the guide Request a model addition).
Alternatively, you can use the following snippet to verify if a model is available on the Azure ML Model Catalog programmatically:
import requests
response = requests.get(f"https://generate-azureml-urls.azurewebsites.net/api/generate?modelId={model_id}")
if response.status_code != 200:
    print(
        f"[{response.status_code=}] {model_id=} not available on the Hugging Face Collection in Azure ML Model Catalog"
    )
Then, once the model URI has been built correctly and the model exists on Azure ML, you can create the Managed Online Endpoint, specifying its name (note that the name must be unique per region, so it’s good practice to add some sort of unique identifier to it in case multi-region deployments are intended) via the ManagedOnlineEndpoint Python class.
Also note that by default the ManagedOnlineEndpoint will use the key authentication method, meaning that there will be a primary and secondary key that should be sent within the Authorization header as a Bearer token; alternatively, the aml_token authentication method can be used, read more about it at Authenticate clients for online endpoints.
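For illustration, a minimal sketch of selecting the authentication mode when defining the endpoint is shown below; it mirrors the endpoint creation further down, with “key” being the default mode.
import os
from azure.ai.ml.entities import ManagedOnlineEndpoint

# "key" is the default authentication mode; switch to "aml_token" to use
# Azure ML token-based authentication instead of the primary/secondary keys.
endpoint = ManagedOnlineEndpoint(
    name=os.getenv("AML_ENDPOINT_NAME"),
    auth_mode="key",  # or "aml_token"
)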
The deployment, created via the ManagedOnlineDeployment Python class, defines the actual model deployment that will be exposed via the previously created endpoint. The ManagedOnlineDeployment expects: the model, i.e., the previously built URI azureml://registries/HuggingFace/models/Qwen-Qwen2.5-VL-32B-Instruct/labels/latest; the endpoint_name; and the instance requirements, namely the instance_type and the instance_count.
Every model in the Hugging Face Collection is powered by an efficient inference backend, and each can run on a wide variety of instance types (as listed in Supported Hardware); in this case, an instance with an NVIDIA H100 GPU will be used, i.e., Standard_NC40ads_H100_v5.
Since some models and inference engines need to run on GPU-accelerated instances, you may need to request a quota increase for some of the supported instances depending on the model you want to deploy. Also, keep in mind that each model comes with a list of all the supported instances, with the recommended one for each tier being the smallest instance in terms of available VRAM. Read more about quota increase requests for Azure ML at Manage and increase quotas and limits for resources with Azure Machine Learning.
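If you are unsure which GPU-accelerated VM sizes are available in your workspace region, the sketch below lists them programmatically with the previously instantiated client, assuming the list_sizes operation of the compute client; quota is still managed separately.
# List the VM sizes available to the workspace region and keep only the GPU-accelerated
# ones, to cross-check against the instances supported by the model you want to deploy.
gpu_sizes = [size.name for size in client.compute.list_sizes() if (size.gpus or 0) > 0]
print(gpu_sizes)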
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment
endpoint = ManagedOnlineEndpoint(name=os.getenv("AML_ENDPOINT_NAME"))
deployment = ManagedOnlineDeployment(
name=os.getenv("AML_DEPLOYMENT_NAME"),
endpoint_name=os.getenv("AML_ENDPOINT_NAME"),
model=model_uri,
instance_type="Standard_NC40ads_H100_v5",
instance_count=1,
)
client.begin_create_or_update(endpoint).wait()

client.online_deployments.begin_create_or_update(deployment).wait()

Note that whilst the Azure ML Endpoint creation is relatively fast, the deployment will take longer since it needs to allocate the resources on Azure, so expect it to take ~10-15 minutes, though it could also take longer depending on instance provisioning and availability.
Once deployed, via the Azure ML Studio you’ll be able to inspect the logs at https://ml.azure.com/endpoints/realtime/qwen-vlm-endpoint/logs, see how to consume the deployed API at https://ml.azure.com/endpoints/realtime/qwen-vlm-endpoint/consume, or check their (on preview) model monitoring feature at https://ml.azure.com/endpoints/realtime/qwen-vlm-endpoint/Monitoring.
If you named your Azure ML Endpoint differently (set via the AML_ENDPOINT_NAME environment variable), you’ll need to update the URLs above to https://ml.azure.com/endpoints/realtime/<AML_ENDPOINT_NAME> for those to work as expected.
More information about Azure ML Managed Online Endpoints at Deploy and score a machine learning model by using an online endpoint (which can be done via the az CLI, the Azure ML SDK for Python as above, the Azure ML Studio, the Hugging Face Hub from the given model card, or an ARM Template).
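You can also check the provisioning state and fetch the deployment logs programmatically with the same azure.ai.ml.MLClient, as sketched below (assuming the get_logs operation of the deployments client); this is handy when debugging a slow or failed provisioning.
# Check that the endpoint is up and retrieve its scoring URI.
endpoint_info = client.online_endpoints.get(name=os.getenv("AML_ENDPOINT_NAME"))
print(endpoint_info.provisioning_state, endpoint_info.scoring_uri)

# Pull the last log lines of the deployment container.
logs = client.online_deployments.get_logs(
    name=os.getenv("AML_DEPLOYMENT_NAME"),
    endpoint_name=os.getenv("AML_ENDPOINT_NAME"),
    lines=100,
)
print(logs)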
Finally, now that the Azure ML Endpoint is deployed, you can send requests to it. In this case, since the task of the model is text-generation (also known as chat-completion), you can either use the default scoring route, /generate, which is the standard text generation route without chat capabilities (such as leveraging the chat template or exposing an OpenAI-compatible interface), or alternatively benefit from the fact that Text Generation Inference (TGI), i.e., the inference engine the model is running on, exposes OpenAI-compatible routes.
Note that only some of the options are listed below, but you can send requests to the deployed endpoint as long as you send the HTTP requests with the azureml-model-deployment header set to the name of the Azure ML Deployment (not the Endpoint) and have the necessary authentication token or key for the given endpoint; then you can send HTTP requests to all the routes that the backend engine exposes, not only to the scoring route.
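As an illustration, the sketch below sends a raw HTTP request to one of the non-scoring routes, here the /info route that TGI is assumed to expose, with the required authentication and deployment headers.
import os
import requests

# The scoring URI ends with /generate; strip it to reach the other routes exposed by the engine.
base_url = client.online_endpoints.get(name=os.getenv("AML_ENDPOINT_NAME")).scoring_uri.replace("/generate", "")
api_key = client.online_endpoints.get_keys(os.getenv("AML_ENDPOINT_NAME")).primary_key

response = requests.get(
    f"{base_url}/info",
    headers={
        "Authorization": f"Bearer {api_key}",
        "azureml-model-deployment": os.getenv("AML_DEPLOYMENT_NAME"),
    },
)
print(response.json())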
You can invoke the Azure ML Endpoint on the scoring route, in this case /generate (more information about it in the Qwen2.5-VL-32B-Instruct page on AzureML), via the Azure ML Python SDK with the previously instantiated azure.ai.ml.MLClient (or a new one if working from a different session), as follows:
Since in this case you are deploying a Vision Language Model (VLM), to leverage the vision capabilities through the /generate route you will need to include either the image URL or the base64 encoding of the image, formatted in Markdown, e.g. What is this a picture of?\n\n![](<IMAGE_URL>) or What is this a picture of?\n\n![](data:image/png;base64,<BASE64_IMAGE>).
More information at Vision Language Model Inference in TGI.
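For instance, the following is a minimal sketch of how a base64-embedded prompt could be built from a local image; the file path is just a placeholder.
import base64

# Read a local image and embed it in the prompt as a Markdown image with a base64 data URI.
with open("local-image.png", "rb") as image_file:  # placeholder path
    base64_image = base64.b64encode(image_file.read()).decode("utf-8")

prompt = f"What is this a picture of?\n\n![](data:image/png;base64,{base64_image})"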
import json
import os
import tempfile
with tempfile.NamedTemporaryFile(mode="w+", delete=True, suffix=".json") as tmp:
json.dump(
{
"inputs": "What is this a picture of?\n\n",
"parameters": {"max_new_tokens": 128},
},
tmp,
)
tmp.flush()
response = client.online_endpoints.invoke(
endpoint_name=os.getenv("AML_ENDPOINT_NAME"),
deployment_name=os.getenv("AML_DEPLOYMENT_NAME"),
request_file=tmp.name,
)
print(json.loads(response))
Note that the Azure ML Python SDK requires a path to a JSON file when invoking endpoints, meaning that whatever payload you want to send needs to be converted into a JSON file first; this only applies to requests sent via the Azure ML Python SDK.
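If you plan to send several payloads, you could wrap that boilerplate in a small helper like the hypothetical invoke_endpoint function sketched below, which reuses the previously instantiated client.
import json
import os
import tempfile

def invoke_endpoint(payload: dict) -> dict:
    """Hypothetical helper: dump the payload to a temporary JSON file and invoke the endpoint."""
    with tempfile.NamedTemporaryFile(mode="w+", delete=True, suffix=".json") as tmp:
        json.dump(payload, tmp)
        tmp.flush()
        response = client.online_endpoints.invoke(
            endpoint_name=os.getenv("AML_ENDPOINT_NAME"),
            deployment_name=os.getenv("AML_DEPLOYMENT_NAME"),
            request_file=tmp.name,
        )
    return json.loads(response)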
Since Text Generation Inference (TGI) also exposes OpenAI-compatible routes, you can also leverage the OpenAI Python SDK to send requests to the deployed Azure ML Endpoint.
%pip install openai --upgrade --quiet
To use the OpenAI Python SDK with Azure ML, you need to first retrieve the api_url with the /v1 route (which contains the v1/chat/completions endpoint that the OpenAI Python SDK will send requests to), and the api_key, which is the primary key generated in Azure ML (unless a dedicated Azure ML Token is used instead). You can do so via the previously instantiated azure.ai.ml.MLClient as follows:
api_key = client.online_endpoints.get_keys(os.getenv("AML_ENDPOINT_NAME")).primary_key
api_url = client.online_endpoints.get(os.getenv("AML_ENDPOINT_NAME")).scoring_uri.replace("/generate", "/v1")
Alternatively, you can also build the API URL manually as follows:
api_url = f"https://{os.getenv('AML_ENDPOINT_NAME')}.{os.getenv('LOCATION')}.inference.ml.azure.com/v1"
api_url
Or just retrieve it manually from the Azure ML Studio.
Then you can use the OpenAI Python SDK as usual, making sure to include the extra header required by Azure ML: the azureml-model-deployment header containing the Azure ML Deployment name. It can either be set on each call to chat.completions.create via the extra_headers parameter, as commented below, or via the default_headers parameter when instantiating the OpenAI client (the recommended approach, since the header needs to be present on every request, so setting it once is preferred).
import os
from openai import OpenAI
openai_client = OpenAI(
base_url=api_url,
api_key=api_key,
default_headers={"azureml-model-deployment": os.getenv("AML_DEPLOYMENT_NAME")},
)
completion = openai_client.chat.completions.create(
model="Qwen/Qwen2.5-VL-32B-Instruct",
messages=[
{"role": "system", "content": "You are an assistant that responds like a pirate."},
{
"role": "user",
"content": [
{"type": "text", "text": "What is in this image?"},
{
"type": "image_url",
"image_url": {
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"
},
},
],
},
],
max_tokens=128,
# extra_headers={"azureml-model-deployment": os.getenv("AML_DEPLOYMENT_NAME")},
)
print(completion)
Alternatively, you can also just use cURL to send requests to the deployed endpoint, with the api_url and api_key values programmatically retrieved in the OpenAI snippet and now set as environment variables so that cURL can use them, as follows:
os.environ["API_URL"] = api_url
os.environ["API_KEY"] = api_key!curl -sS $API_URL/chat/completions \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-H "azureml-model-deployment: $AML_DEPLOYMENT_NAME" \
-d '{ \
"messages":[ \
{"role":"system","content":"You are an assistant that replies like a pirate."}, \
{"role":"user","content": [ \
{"type":"text","text":"What is in this image?"}, \
{"type":"image_url","image_url":{"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"}} \
]} \
], \
"max_tokens":128 \
}' | jq
You can also just go to the Azure ML endpoint in the Azure ML Studio and retrieve both the URL (note that it will default to the /generate route, but to use the OpenAI-compatible layer you need to use the /v1/chat/completions route instead) and the API Key values, as well as the Azure ML Model Deployment name for the given model, and then send the request as follows after replacing the values from Azure ML:
curl -sS <API_URL>/v1/chat/completions \
-H "Authorization: Bearer <PRIMARY_KEY>" \
-H "Content-Type: application/json" \
-H "azureml-model-deployment: $AML_DEPLOYMENT_NAME" \
-d '{
  "messages": [
    {"role": "system", "content": "You are an assistant that replies like a pirate."},
    {"role": "user", "content": [
      {"type": "text", "text": "What is in this image?"},
      {"type": "image_url", "image_url": {"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"}}
    ]}
  ],
  "max_tokens": 128
}' | jq
Gradio is the fastest way to demo your machine learning model with a friendly web interface so that anyone can use it. You can also leverage the OpenAI Python SDK to build a simple multimodal (text and images) ChatInterface that you can use within the Jupyter Notebook cell where you are running it.
Ideally you could deploy the Gradio Chat Interface connected to your Azure ML Managed Online Endpoint as an Azure Container App as described in Tutorial: Build and deploy from source code to Azure Container Apps. If you’d like us to show you how to do it for Gradio in particular, feel free to open an issue requesting it.
%pip install gradio --upgrade --quiet
See below an example of how to leverage Gradio’s ChatInterface, or find more information about it at Gradio ChatInterface Docs.
import os
import base64
from typing import Dict, Iterator, List, Literal
import gradio as gr
from openai import OpenAI
openai_client = OpenAI(
base_url=os.getenv("API_URL"),
api_key=os.getenv("API_KEY"),
default_headers={"azureml-model-deployment": os.getenv("AML_DEPLOYMENT_NAME")},
)
def predict(
message: Dict[str, str | List[str]], history: List[Dict[Literal["role", "content"], str]]
) -> Iterator[str]:
content = []
if message["text"]:
content.append({"type": "text", "text": message["text"]})
for file_path in message.get("files", []):
with open(file_path, "rb") as image_file:
base64_image = base64.b64encode(image_file.read()).decode("utf-8")
content.append(
{
"type": "image_url",
"image_url": {"url": f"data:image/png;base64,{base64_image}"},
}
)
messages = history.copy()
messages.append({"role": "user", "content": content})
stream = openai_client.chat.completions.create(
model="Qwen/Qwen2.5-VL-32B-Instruct",
messages=messages,
stream=True,
)
buffer = ""
for chunk in stream:
if chunk.choices[0].delta.content:
buffer += chunk.choices[0].delta.content
yield buffer
demo = gr.ChatInterface(
predict,
textbox=gr.MultimodalTextbox(label="Input", file_types=[".jpg", ".png", ".jpeg"], file_count="multiple"),
multimodal=True,
type="messages",
)
demo.launch()
Once you are done using the Azure ML Endpoint / Deployment, you can delete the resources as follows, meaning that you will stop paying for the instance on which the model is running and all the attached costs will stop.
client.online_endpoints.begin_delete(name=os.getenv("AML_ENDPOINT_NAME")).result()
Throughout this example you learnt how to create and configure your Azure account for Azure ML, how to create an Azure ML Managed Online Endpoint running a model from the Hugging Face Collection in the Azure ML Model Catalog, how to send inference requests to it with different approaches, and how to stop and delete the resources afterwards.
If you have any doubt, issue or question about this example, feel free to open an issue and we’ll do our best to help!
📍 Find the complete example on GitHub here!