Updates #6 by ezelanza - opened
blog/openvino_vlm/openvino-vlm.md
CHANGED
@@ -139,16 +139,24 @@ Try the complete notebook [here](https://github.com/huggingface/optimum-intel/bl

## Conclusion

-Multimodal AI is becoming more accessible thanks to smaller, optimized models like
-
-
+Multimodal AI is becoming more accessible thanks to smaller, optimized models like SmolVLM and tools such as Hugging Face Optimum and OpenVINO. While deploying vision-language models locally still presents challenges, this workflow shows that it's possible to run lightweight image-and-text models on a wide range of hardware.
+
+To give an idea of how the model performs on different Intel hardware, we benchmarked SmolVLM2-256M with weight-only quantization. The input is an image of a flower with a bee on it, and the model is asked: “What is on the flower?”. We measure the model size (how much space the model occupies), the average latency (how long it takes to process the image and generate an answer such as “A bee is on the flower.”), the image throughput (how many such images the model can handle per second), and the token throughput (how fast it produces the text of its response). Token throughput is reported for the first token, showing how quickly the model starts generating a response, and for the second token, showing how efficiently it continues generating the rest of the answer.
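+
+As a rough illustration, the measurement could look like the sketch below (the checkpoint name and image path are placeholders, and only end-to-end latency is timed; per-token timings would need a streamer):
+
+```python
+import time
+
+from PIL import Image
+from transformers import AutoProcessor
+from optimum.intel import OVModelForVisualCausalLM
+
+model_id = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"  # placeholder checkpoint
+processor = AutoProcessor.from_pretrained(model_id)
+# export=True converts the model to OpenVINO on the fly; load_in_8bit applies
+# weight-only quantization during that export.
+model = OVModelForVisualCausalLM.from_pretrained(
+    model_id, export=True, load_in_8bit=True, device="CPU"
+)
+
+image = Image.open("flower_with_bee.jpg")  # placeholder test image
+messages = [{"role": "user", "content": [
+    {"type": "image"},
+    {"type": "text", "text": "What is on the flower?"},
+]}]
+prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
+inputs = processor(text=prompt, images=[image], return_tensors="pt")
+
+start = time.perf_counter()
+generated = model.generate(**inputs, max_new_tokens=32)
+latency = time.perf_counter() - start
+answer = processor.batch_decode(generated, skip_special_tokens=True)[0]
+print(f"{latency:.4f} s -> {answer}")
+```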
+
+Here are the results across different Intel hardware (all values are shown before / after weight-only quantization):
+
+Weight-only quantization shrinks the model from 980.61 MB to 248 MB; this applies to all devices.
+
+| Device | Image Throughput (images/s) | First Token Throughput (tokens/s) | Second Token Throughput (tokens/s) | Latency (s) |
+|--------|-----------------------------|-----------------------------------|------------------------------------|-------------|
+| CPU | 0.33 / 0.55 | 2.69 / 3.94 | 83.25 / 146.1 | 3.5249 / 2.1548 |
+| iGPU | 0.58 / 0.53 | 5.01 / 5.26 | 51.62 / 49.56 | 2.1386 / 2.3182 |
+| GPU (B580) | 15.75 / 15.01 | 34.51 / 27.54 | 149.79 / 120.91 | 0.2074 / 0.2376 |
+| GPU (A770) | 10.68 / 10.89 | 16.57 / 15.79 | 83.01 / 69.1 | 0.3321 / 0.3403 |
+| NPU | - | - | - | - |
+
+This benchmark demonstrates that smaller, optimized multimodal models like SmolVLM2-256M can run effectively across a range of Intel hardware. Weight-only quantization significantly reduces the model size and improves efficiency without a major impact on throughput. GPUs deliver the highest image and token processing speeds, while CPUs and iGPUs remain viable for lighter workloads. Overall, this shows that lightweight vision-language models can be deployed locally with reasonable performance, making multimodal AI more accessible.
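+
+For reference, the size reduction reported above can be obtained by exporting the model with a weight-only quantization config, roughly as in this sketch (the 8-bit setting and output directory are illustrative choices):
+
+```python
+from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig
+
+# 8-bit weight-only quantization: only the stored weights shrink,
+# activations are still computed in full precision.
+q_config = OVWeightQuantizationConfig(bits=8)
+model = OVModelForVisualCausalLM.from_pretrained(
+    "HuggingFaceTB/SmolVLM2-256M-Video-Instruct",  # placeholder checkpoint
+    export=True,
+    quantization_config=q_config,
+)
+model.save_pretrained("smolvlm2-256m-ov-int8")  # illustrative output directory
+```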
## Useful Links & Resources
- [Notebook](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb)
- [Try our Space](https://huggingface.co/spaces/echarlaix/vision-langage-openvino)
-- Watch the webinar recording
+- [Watch the webinar recording](https://web.cvent.com/event/d550a2a7-04f2-4a28-b641-3af228e318ca/regProcessStep1?utm_campaign=speakers4&utm_medium=organic&utm_source=Community)
- [Optimum Intel Documentation](https://huggingface.co/docs/optimum-intel/en/openvino/inference)
#### Notices and Disclaimers