add comments
blog/openvino_vlm/openvino-vlm.md

Weight-only quantization means that only the weights are being quantized, leaving the activations unchanged.

However, the “interactions” during the trip, like drinking water, remain unchanged. This is similar to what happens to activations, which stay in high precision (FP32 or BF16) to preserve accuracy during computation.

As a result, the model becomes smaller and more memory-efficient, improving loading times: INT8 weights take one byte per parameter instead of the four needed in FP32, roughly a 4x size reduction. But since activations are not quantized, inference speed gains are limited. Since OpenVINO 2024.3, if the model’s weights have been quantized, the corresponding activations are also dynamically quantized at runtime, leading to additional speedups.

Weight-only quantization is a simple first step since it usually doesn’t result in significant accuracy degradation.
In order to run it, you will need to create a quantization configuration using Optimum’s `OVWeightQuantizationConfig` as follows:

```python
from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"

# Define an 8-bit weight-only quantization configuration
q_config = OVWeightQuantizationConfig(bits=8)

# Apply quantization and save the new model
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_int8")
```

Or equivalently, using the CLI:

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM-256M-Instruct --weight-format int8 smolvlm_int8/
```
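
To sanity-check the memory savings, you can compare the size of the exported folders on disk. Below is a minimal sketch; `smolvlm_fp32/` is a hypothetical unquantized export (e.g. produced with `--weight-format fp32`) used as the baseline:

```python
import os

def dir_size_mb(path: str) -> float:
    # Sum the sizes of all files under `path`, in megabytes
    total = 0
    for root, _, files in os.walk(path):
        total += sum(os.path.getsize(os.path.join(root, f)) for f in files)
    return total / (1024 ** 2)

# smolvlm_fp32/ is a hypothetical unquantized export used as a baseline
for path in ("smolvlm_fp32", "smolvlm_int8"):
    print(f"{path}: {dir_size_mb(path):.0f} MB")
```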
## Option 2: Static Quantization

When applying static quantization, both the weights and the activations are quantized. This requires a calibration step, in which a subset of a dataset is used to estimate the activation ranges. In the following example, we use 50 samples of the [contextual dataset](https://huggingface.co/datasets/ucla-contextual/contextual_test) to perform this calibration step.

```python
from optimum.intel import OVModelForVisualCausalLM, OVQuantizationConfig

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"

# Define an 8-bit static quantization configuration, calibrated on 50 samples
q_config = OVQuantizationConfig(bits=8, dataset="contextual", num_samples=50)

# Apply quantization and save the new model
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_static_int8")
```

Or equivalently, using the CLI, where static quantization is selected with `--quant-mode`:

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM-256M-Instruct --quant-mode int8 --dataset contextual --num-samples 50 smolvlm_static_int8/
```
Quantizing activations adds small errors that can build up and affect accuracy, so careful testing afterward is important. More information and examples can be found in [our documentation](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#pipeline-quantization).
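
One simple spot-check is to compare the answers of the two quantized variants on the same input. Below is a minimal sketch, assuming a hypothetical local test image `sample.jpg` and the two models saved above; a real evaluation should use a proper benchmark rather than a single sample:

```python
from optimum.intel import OVModelForVisualCausalLM
from transformers import AutoProcessor
from transformers.image_utils import load_image

processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-256M-Instruct")
image = load_image("sample.jpg")  # hypothetical local test image

messages = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Can you describe this image?"}]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

# Generate with both quantized variants and compare the answers
for path in ("smolvlm_int8", "smolvlm_static_int8"):
    model = OVModelForVisualCausalLM.from_pretrained(path)
    out = model.generate(**inputs, max_new_tokens=64)
    print(f"{path}: {processor.batch_decode(out, skip_special_tokens=True)[0]}")
```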
### Step 3: Run inference