Update pipeline example
README.md

### How to use

Here's the prompt template for this model, but we recommend using chat templates to format the prompt with `processor.apply_chat_template()`. That will apply the correct template for a given checkpoint for you.

```
"A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. USER: <image>\nWhat is shown in this image? ASSISTANT:"
```
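For illustration only, here is a minimal hand-written sketch of how that single-turn template assembles a prompt from a `messages` list. The `render_prompt` helper is not part of `transformers`; it just mirrors the string above, and in practice you should let `processor.apply_chat_template()` do the rendering:

```python
def render_prompt(messages):
    # Hand-rolled illustration of the single-turn template shown above;
    # the real rendering should come from processor.apply_chat_template().
    system = (
        "A chat between a curious human and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the human's questions."
    )
    prompt = system
    for message in messages:
        if message["role"] == "user":
            # Image placeholders become the "<image>\n" token span; text parts
            # are inserted verbatim.
            body = "".join(
                "<image>\n" if part["type"] == "image" else part["text"]
                for part in message["content"]
            )
            prompt += f" USER: {body} ASSISTANT:"
    return prompt

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]
print(render_prompt(messages))
```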

To run the model with the `pipeline`, see the example below:

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="llava-hf/llava-v1.6-vicuna-7b-hf")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"},
            {"type": "text", "text": "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"},
        ],
    },
]

out = pipe(text=messages, max_new_tokens=20)
print(out)
# [{'input_text': [{'role': 'user', 'content': [{'type': 'image', 'url': 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg'}, {'type': 'text', 'text': 'What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud'}]}], 'generated_text': 'Lava'}]
```
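The pipeline returns one dict per input, so the answer text can be pulled out of the `generated_text` field. A small sketch using a hard-coded copy of the output above, so it runs without downloading the model weights:

```python
# Hard-coded copy of the pipeline output shown above (illustration only).
out = [
    {
        "input_text": [
            {
                "role": "user",
                "content": [
                    {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"},
                    {"type": "text", "text": "What does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud"},
                ],
            },
        ],
        "generated_text": "Lava",
    },
]

# Each result dict carries the echoed input plus the model's generation.
answers = [result["generated_text"] for result in out]
print(answers)  # ['Lava']
```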

You can also load and use the model as follows:

```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
```