update for chat template
README.md CHANGED
@@ -176,6 +176,27 @@ out = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_to
print(out)
```

From transformers>=v4.48, you can also pass an image/video URL or a local path in the conversation history and let the chat template handle the rest.
For videos, you also need to indicate how many frames to sample with `num_frames`; otherwise, the whole video will be loaded.
The chat template will load the image/video for you and return the inputs as `torch.Tensor`, which you can pass directly to `model.generate()`.

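As a rough illustration of what a `num_frames`-style argument does (this is a hypothetical helper, not the library's actual implementation), frame sampling typically picks evenly spaced frames from the clip:

```python
# Hypothetical helper (not part of transformers): pick `num_frames`
# evenly spaced frame indices from a clip with `total_frames` frames.
def sample_frame_indices(total_frames: int, num_frames: int) -> list[int]:
    step = total_frames / num_frames
    return [int(i * step) for i in range(num_frames)]

print(sample_frame_indices(64, 8))  # [0, 8, 16, 24, 32, 40, 48, 56]
```
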
```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "video", "path": "my_video.mp4"},
            {"type": "text", "text": "What is shown in this image and video?"},
        ],
    },
]

inputs = processor.apply_chat_template(messages, num_frames=8, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
```
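To turn `output` back into text, decode it with the processor as in the earlier example. Since `generate()` on decoder-only models returns the prompt tokens followed by the new tokens, you can slice off the prompt first; a minimal sketch of that slicing, shown with dummy token ids standing in for `inputs["input_ids"]` and the `generate()` output:

```python
import torch

# Dummy ids standing in for inputs["input_ids"] and model.generate() output.
input_ids = torch.tensor([[101, 7592, 2088]])
output = torch.tensor([[101, 7592, 2088, 2003, 1037, 102]])

# Slice off the prompt so only newly generated tokens remain.
new_tokens = output[:, input_ids.shape[1]:]
print(new_tokens.tolist())  # [[2003, 1037, 102]]
```

With real tensors, you would then finish with `processor.batch_decode(new_tokens, skip_special_tokens=True)`.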
|
199 |
+
|
200 |
### Model optimization
|
201 |
|
202 |
#### 4-bit quantization through `bitsandbytes` library
|