add-blogpost #3
opened by echarlaix (HF Staff)

blog/openvino_vlm/openvino-vlm.md (ADDED)
@@ -0,0 +1,158 @@

# Get Your Small Multimodal AI Model Running in 3 Simple Steps

Teaser: Run a Vision Language Model (VLM) locally in three steps, with no need for expensive cloud infrastructure or high-end compute devices. SmolVLM + Optimum Intel + OpenVINO make it possible, and can even accelerate inference on an iGPU or an NPU.

As large language models (LLMs) and chatbots become more capable, AI is moving beyond text; it's now interpreting images and videos as well. This is where Vision Language Models (VLMs) come in, enabling tasks like describing scenes, generating captions, or answering questions about images.

Early models like [Flamingo](https://arxiv.org/abs/2204.14198) and [Idefics](https://huggingface.co/blog/idefics) showed what was possible, both demonstrating impressive capabilities at the 80B-parameter scale. More recently, smaller models have emerged, such as [PaliGemma 3B](https://huggingface.co/google/paligemma-3b-pt-896), [moondream2](https://www.analyticsvidhya.com/blog/2024/03/introducing-moondream2-a-tiny-vision-language-model/), and the [Qwen2-VL models](https://nodeshift.com/blog/how-to-install-qwen2-5-vl-7b-instruct-locally), but even these “small” versions can be tough to run locally, because they still carry a lot of the memory and compute demands of their larger predecessors.

That’s why running AI models locally is still a challenge, but also a huge opportunity. Local inference keeps your data private, gives you fast responses without internet latency, avoids cloud costs, and lets you run and tweak models offline, with full control.

That’s where tools like [Optimum Intel](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide/llm-inference-hf.html), OpenVINO, and the lightweight [SmolVLM](https://huggingface.co/blog/smolvlm) model come in. In this post, we’ll show you how to get a VLM running locally in just three simple steps, with no expensive hardware or discrete GPUs needed (though it can also take advantage of Intel GPUs).

## What is a VLM

Let’s first recap: A Vision Language Model (VLM) can understand both text and images. Instead of just reading or writing text, it can also “see” pictures, so you can ask it to describe a photo, answer a question about an image, or generate a caption. It’s like giving your LLM eyes.

<figure class="image-gallery">
|
18 |
+
<img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/chat1.png">
|
19 |
+
</figure>
|
20 |
+
|
21 |
+
<figure class="image text-center">
|
22 |
+
<img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/chat2.png">
|
23 |
+
</figure>
|
24 |
+
|
25 |
+
<figure class="image text-center">
|
26 |
+
<img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/chat3.png">
|
27 |
+
</figure>
|
28 |
+
|
29 |
+
It’s impressive, but not exactly accessible to run. Take [CogVLM](https://github.com/THUDM/CogVLM), for example: a powerful open-source vision-language model with around 17 billion parameters (a 10B vision encoder plus a 7B language model) that can require [about 80GB of RAM](https://inference.roboflow.com/foundation/cogvlm/) to run in full precision. Inference is also relatively slow: captioning a single image takes 10 to 13 seconds on an NVIDIA T4 GPU ([Roboflow benchmark](https://inference.roboflow.com/foundation/cogvlm/)). Users attempting to run CogVLM on CPUs have reported crashes or memory errors even with 64 GB of RAM, highlighting its impracticality for typical local deployment ([GitHub issue](https://github.com/THUDM/CogVLM/issues/162)). CogVLM is just one example; until recently, most small VLMs posed the same challenge.

In contrast, SmolVLM is purpose-built for low-resource environments, making it a highly efficient solution for deploying vision-language models on laptops or edge devices.
Launched by Hugging Face in 2024, SmolVLM addresses the growing need for multimodal AI that runs locally without requiring high-end GPUs or cloud infrastructure. As vision-language models become essential in areas like accessibility, robotics, and on-device assistants, SmolVLM offers a path to efficient, privacy-preserving inference at the edge.
Architecturally, SmolVLM pairs a lightweight vision encoder with a compact language decoder. This modular design enables it to interpret both images and text.

<figure class="image text-center">
|
36 |
+
<img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/smolvlm.png">
|
37 |
+
<figcaption> SmolVLM architecture (<b><i>Source: <a href="https://huggingface.co/blog/smolvlm#what-is-smolvlm">SmolVLM - small yet mighty Vision Language Model</i></b></a>).
|
38 |
+
</figcaption>
|
39 |
+
</figure>
|
40 |
+
|
41 |
+
It offers a lightweight, efficient solution for running image-and-text models directly on laptops or edge devices.
|
42 |
+
|
43 |
+
## Hugging Face Optimum
|
44 |
+
|
45 |
+
As mentioned, SmolVLM offers a strong advantage for running multimodal models efficiently, but there’s still room for improvement. These models can be further compressed or optimized to run even more effectively on local devices. If you’ve tried optimizing a model by yourself, you probably know it’s not a trivial task.
This is where [Optimum Intel for OpenVINO](https://huggingface.co/docs/optimum-intel/en/index) ([repo](https://github.com/huggingface/optimum-intel)) comes in.
It acts as a bridge between Hugging Face libraries like **[Transformers](https://huggingface.co/docs/transformers/en/index)**, **[Diffusers](https://huggingface.co/docs/diffusers/index)**, **[timm](https://huggingface.co/docs/timm/index)**, and **[sentence-transformers](https://huggingface.co/sentence-transformers)** and Intel’s optimization tools, making it easy to accelerate end-to-end pipelines on Intel hardware.

Before using it, the very first step is to install the library:

```bash
pip install optimum-intel[openvino]
```

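The OpenVINO-backed model classes mirror the familiar Transformers ones, so a quick import doubles as a sanity check that the backend is available (an optional check, shown here only as a suggestion):

```python
# If this import succeeds, optimum-intel and its OpenVINO backend are installed
from optimum.intel import OVModelForVisualCausalLM
print("OpenVINO backend is ready")
```
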
By using Optimum with OpenVINO, you gain several benefits out of the box, such as faster inference and lower memory and storage usage. But you can go even further: quantization can reduce the model size and resource consumption even more. While quantization often requires deep expertise, Optimum simplifies the process, making it much more accessible.

Let’s now see how to run SmolVLM.

## Step 1: Convert your model to the OpenVINO IR

First, you will need to convert your model to the OpenVINO IR (Intermediate Representation). There are multiple ways to do it:

1. You can use the [Optimum CLI](https://huggingface.co/docs/optimum-intel/en/openvino/export#using-the-cli):

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM-256M-Instruct smolvlm_ov/
```

2. Or you can convert it [on the fly](https://huggingface.co/docs/optimum-intel/en/openvino/export#when-loading-your-model) when loading your model:

```python
from optimum.intel import OVModelForVisualCausalLM

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"

# The model is converted to the OpenVINO IR on the fly when loaded
model = OVModelForVisualCausalLM.from_pretrained(model_id)
model.save_pretrained("smolvlm_ov")
```

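Once exported, the model can be reloaded directly from the local directory, so the conversion only has to happen once. The following is a minimal sketch (assuming the `smolvlm_ov/` directory created above and that the corresponding OpenVINO device plugin is installed on your machine) of how you might also target an Intel iGPU or NPU instead of the default CPU:

```python
from optimum.intel import OVModelForVisualCausalLM

# Reload the already-converted model from disk (no re-export needed)
model = OVModelForVisualCausalLM.from_pretrained("smolvlm_ov")

# Optionally move execution to an Intel integrated GPU or NPU;
# this assumes the matching OpenVINO device plugin is available.
model.to("GPU")  # or "NPU"
model.compile()
```
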
## Step 2: Quantization

Now it’s time to optimize the model for efficient execution using **quantization**. Quantization reduces the precision of the model weights and/or activations, leading to smaller, faster models.

Essentially, it's a way to map values from a high-precision data type, such as 32-bit floating-point numbers (FP32), to a lower-precision format, typically 8-bit integers (INT8). While this process offers several key benefits, it can also result in some loss of accuracy.

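To make the mapping concrete, here is a tiny, self-contained toy example of an asymmetric FP32-to-INT8 conversion (a simplified illustration, not the exact scheme OpenVINO applies internally):

```python
import numpy as np

# Map a small FP32 tensor onto the unsigned 8-bit range [0, 255]
x = np.array([-1.6, -0.3, 0.0, 0.7, 2.1], dtype=np.float32)

scale = (x.max() - x.min()) / 255.0          # size of one quantization step
zero_point = round(float(-x.min() / scale))  # integer representing 0.0

x_q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
x_restored = (x_q.astype(np.float32) - zero_point) * scale

print(x_q)              # quantized values, e.g. [0, 89, 110, 158, 255]
print(x_restored - x)   # the small residuals are the quantization error
```
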
<figure class="image text-center">
|
85 |
+
<img src="https://huggingface.co/datasets/openvino/documentation/resolve/main/blog/openvino_vlm/quantization.png">
|
86 |
+
</figure>
|
87 |
+
|
88 |
+
Optimum supports two main post-training quantization methods:

- Weight Only Quantization
- Static Quantization

Let’s explore each of them.

### Option 1: Weight Only Quantization

Weight-only quantization means that only the weights are quantized, leaving the activations in their original precision. To explain this process, let’s imagine preparing for a long backpacking trip. To reduce weight, you replace bulky items like full-size shampoo bottles with compact travel-sized versions. This is like weight-only quantization, where the model’s weights are compressed from 32-bit floating-point numbers to 8-bit integers, reducing the model’s memory footprint.

However, the “interactions” during the trip, like drinking water, remain unchanged. This is similar to what happens to activations, which stay in high precision (FP32 or BF16) to preserve accuracy during computation.

As a result, the model becomes smaller and more memory-efficient, improving loading times. But since activations are not quantized, inference speed gains are limited unless the workload is memory-bound (that is, limited mainly by the speed of reading from or writing to memory rather than by the processor’s computing power).

Weight-only quantization is a simple first step since it usually doesn’t result in significant accuracy degradation.
To run it, you will need to create a quantization configuration using Optimum’s `OVWeightQuantizationConfig`, as follows:

```python
from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig

q_config = OVWeightQuantizationConfig(bits=8)

# Apply quantization and save the new model
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_int8")
```

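To shrink the model further, lower-bit weight compression is also an option. As a hypothetical variant (reusing the imports and `model_id` from the snippet above, and assuming your optimum-intel version supports 4-bit weight compression), you could do:

```python
# 4-bit weights give an even smaller footprint, but expect a larger accuracy
# impact than with 8-bit weights, so evaluate the model before deploying it.
q_config_int4 = OVWeightQuantizationConfig(bits=4)
q_model_int4 = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config_int4)
q_model_int4.save_pretrained("smolvlm_int4")
```

The same kind of weight compression can also be applied at export time with the CLI from Step 1, for instance via its `--weight-format int8` option.
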
### Option 2: Static Quantization

When applying static quantization, both the weights and the activations are quantized. This requires a calibration step, in which a subset of a dataset is used to estimate the activation ranges. In the following example, we use 50 samples from the [contextual dataset](https://huggingface.co/datasets/ucla-contextual/contextual_test) to perform this calibration step.

```python
from optimum.intel import OVModelForVisualCausalLM, OVQuantizationConfig

# Quantize both weights and activations, calibrating on 50 samples
q_config = OVQuantizationConfig(bits=8, dataset="contextual", num_samples=50)
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_static_int8")
```

Quantizing activations adds small errors that can build up and affect accuracy, so careful testing afterward is important. More information and examples can be found in [our documentation](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#pipeline-quantization).

## Step 3: Run inference

You can now run inference with your quantized model.

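The generation call below expects a `processor` and a dictionary of preprocessed `inputs`. Here is a minimal sketch of how they might be prepared, following the usual chat-template pattern for SmolVLM (the image URL and question are just example placeholders):

```python
from transformers import AutoProcessor
from transformers.image_utils import load_image

# Load the processor associated with the model and an example image
processor = AutoProcessor.from_pretrained(model_id)
image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")

# Build a chat-style prompt containing one image and one question
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Can you describe this image?"},
        ],
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
```
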
```python
# Generate outputs with quantized model
generated_ids = q_model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])
```

Try the complete notebook [here](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb).

## Conclusion

Multimodal AI is becoming more accessible thanks to smaller, optimized models like **SmolVLM**, along with tools such as **Hugging Face Optimum** and **OpenVINO**. While deploying vision-language models locally still comes with challenges, this workflow shows that it's possible to run lightweight image-and-text models on modest hardware.

By combining quantization techniques with OpenVINO's inference engine, you can reduce memory and compute requirements significantly, making local deployment feasible for a wide range of applications. Whether you're experimenting, prototyping, or looking to deploy offline, this setup gives you a practical starting point.

As models and tooling continue to improve, so will the ability to run powerful multimodal systems without relying on the cloud.

## Useful Links & Resources

- [Notebook](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb)
- [Try our Space](https://huggingface.co/spaces/echarlaix/vision-langage-openvino)
- Watch the webinar recording
- [Optimum Intel Documentation](https://huggingface.co/docs/optimum-intel/en/openvino/inference)

#### Notices and Disclaimers

Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.