chenjoya committed on
Commit 3a1a3d1 · verified · 1 Parent(s): 9b7f248

Update README.md

Files changed (1)
  1. README.md +1 -410
README.md CHANGED
@@ -11,413 +11,4 @@ tags:
  - streaming
  ---

- # LiveCC-7B-Base
-
- ## Introduction
-
- We introduce LiveCC, the first multimodal LLM with real-time video commentary capability, which is also strong at general image/video tasks.
-
- - Project Page: https://showlab.github.io/livecc
-
- > [!Important]
- > This is the base model, pre-trained only on the [Live-CC-5M](https://huggingface.co/datasets/chenjoya/Live-CC-5M) dataset with our proposed streaming frame-words paradigm. The instruction-tuned model is [LiveCC-7B-Instruct](https://huggingface.co/chenjoya/LiveCC-7B-Instruct).
-
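- To try this base checkpoint directly, it should load with the standard Qwen2-VL classes in `transformers`; a minimal sketch (the repository id `chenjoya/LiveCC-7B-Base` is assumed from this model card, adjust it if the repo name differs):
-
- ```python
- from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
-
- # Repository id assumed from this model card; adjust if it differs.
- model = Qwen2VLForConditionalGeneration.from_pretrained(
-     "chenjoya/LiveCC-7B-Base", torch_dtype="auto", device_map="auto"
- )
- processor = AutoProcessor.from_pretrained("chenjoya/LiveCC-7B-Base")
- ```
-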
- ## Training with Streaming Frame-Words Paradigm
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642435a1a3adbc7142c3b0a6/T-Zs50VlFT2tE7RdV49TE.png)
-
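- The figure above illustrates the paradigm: during pre-training, video frames are interleaved with the ASR words whose timestamps fall inside each frame interval, so the model learns to emit commentary words as frames stream in. Below is a conceptual sketch of that interleaving; it is illustrative only, and the function name and data layout are assumptions rather than the actual training code:
-
- ```python
- # Illustrative only: assumed data layout, not the actual LiveCC training code.
- def interleave_frame_words(frames, timed_words, fps=1.0):
-     """Interleave each frame with the ASR words whose timestamps fall in its interval."""
-     sequence = []
-     for i, frame in enumerate(frames):
-         start, end = i / fps, (i + 1) / fps
-         words_in_interval = [w for w, t in timed_words if start <= t < end]
-         sequence.append(("<frame>", frame))                        # visual tokens for this interval
-         sequence.extend(("<word>", w) for w in words_in_interval)  # commentary words for this interval
-     return sequence
-
- # e.g. interleave_frame_words(["f0", "f1"], [("hello", 0.2), ("world", 1.3)])
- # -> [("<frame>", "f0"), ("<word>", "hello"), ("<frame>", "f1"), ("<word>", "world")]
- ```
-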
- ## Quickstart
- Like qwen-vl-utils, we offer a toolkit to help you handle various types of visual input more conveniently, **especially video streaming inputs**. You can install it with the following command:
-
- ```bash
- pip install qwen-vl-utils livecc-utils
- ```
-
- Here is a code snippet showing how to use the chat model with `transformers` and the above utils:
-
- ```python
- from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
- from qwen_vl_utils import process_vision_info
-
- # default: Load the model on the available device(s)
- model = Qwen2VLForConditionalGeneration.from_pretrained(
-     "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
- )
-
- # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
- # model = Qwen2VLForConditionalGeneration.from_pretrained(
- #     "Qwen/Qwen2-VL-7B-Instruct",
- #     torch_dtype=torch.bfloat16,
- #     attn_implementation="flash_attention_2",
- #     device_map="auto",
- # )
-
- # default processor
- processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
-
- # The default range for the number of visual tokens per image in the model is 4-16384. You can set min_pixels and max_pixels according to your needs, such as a token count range of 256-1280, to balance speed and memory usage.
- # min_pixels = 256*28*28
- # max_pixels = 1280*28*28
- # processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels)
-
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {
-                 "type": "image",
-                 "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
-             },
-             {"type": "text", "text": "Describe this image."},
-         ],
-     }
- ]
-
- # Preparation for inference
- text = processor.apply_chat_template(
-     messages, tokenize=False, add_generation_prompt=True
- )
- image_inputs, video_inputs = process_vision_info(messages)
- inputs = processor(
-     text=[text],
-     images=image_inputs,
-     videos=video_inputs,
-     padding=True,
-     return_tensors="pt",
- )
- inputs = inputs.to("cuda")
-
- # Inference: Generation of the output
- generated_ids = model.generate(**inputs, max_new_tokens=128)
- generated_ids_trimmed = [
-     out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
- ]
- output_text = processor.batch_decode(
-     generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
- )
- print(output_text)
- ```
- <details>
- <summary>Without qwen_vl_utils</summary>
-
- ```python
- from PIL import Image
- import requests
- import torch
- from torchvision import io
- from typing import Dict
- from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
-
- # Load the model in half-precision on the available device(s)
- model = Qwen2VLForConditionalGeneration.from_pretrained(
-     "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
- )
- processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
-
- # Image
- url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
- image = Image.open(requests.get(url, stream=True).raw)
-
- conversation = [
-     {
-         "role": "user",
-         "content": [
-             {
-                 "type": "image",
-             },
-             {"type": "text", "text": "Describe this image."},
-         ],
-     }
- ]
-
- # Preprocess the inputs
- text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
- # Expected output: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Describe this image.<|im_end|>\n<|im_start|>assistant\n'
-
- inputs = processor(
-     text=[text_prompt], images=[image], padding=True, return_tensors="pt"
- )
- inputs = inputs.to("cuda")
-
- # Inference: Generation of the output
- output_ids = model.generate(**inputs, max_new_tokens=128)
- generated_ids = [
-     output_ids[len(input_ids) :]
-     for input_ids, output_ids in zip(inputs.input_ids, output_ids)
- ]
- output_text = processor.batch_decode(
-     generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
- )
- print(output_text)
- ```
- </details>
- <details>
- <summary>Multi-image inference</summary>
-
- ```python
- # Messages containing multiple images and a text query
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {"type": "image", "image": "file:///path/to/image1.jpg"},
-             {"type": "image", "image": "file:///path/to/image2.jpg"},
-             {"type": "text", "text": "Identify the similarities between these images."},
-         ],
-     }
- ]
-
- # Preparation for inference
- text = processor.apply_chat_template(
-     messages, tokenize=False, add_generation_prompt=True
- )
- image_inputs, video_inputs = process_vision_info(messages)
- inputs = processor(
-     text=[text],
-     images=image_inputs,
-     videos=video_inputs,
-     padding=True,
-     return_tensors="pt",
- )
- inputs = inputs.to("cuda")
-
- # Inference
- generated_ids = model.generate(**inputs, max_new_tokens=128)
- generated_ids_trimmed = [
-     out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
- ]
- output_text = processor.batch_decode(
-     generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
- )
- print(output_text)
- ```
- </details>
-
- <details>
- <summary>Video inference</summary>
-
- ```python
- # Messages containing an image list as a video and a text query
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {
-                 "type": "video",
-                 "video": [
-                     "file:///path/to/frame1.jpg",
-                     "file:///path/to/frame2.jpg",
-                     "file:///path/to/frame3.jpg",
-                     "file:///path/to/frame4.jpg",
-                 ],
-                 "fps": 1.0,
-             },
-             {"type": "text", "text": "Describe this video."},
-         ],
-     }
- ]
- # Messages containing a video and a text query
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {
-                 "type": "video",
-                 "video": "file:///path/to/video1.mp4",
-                 "max_pixels": 360 * 420,
-                 "fps": 1.0,
-             },
-             {"type": "text", "text": "Describe this video."},
-         ],
-     }
- ]
-
- # Preparation for inference
- text = processor.apply_chat_template(
-     messages, tokenize=False, add_generation_prompt=True
- )
- image_inputs, video_inputs = process_vision_info(messages)
- inputs = processor(
-     text=[text],
-     images=image_inputs,
-     videos=video_inputs,
-     padding=True,
-     return_tensors="pt",
- )
- inputs = inputs.to("cuda")
-
- # Inference
- generated_ids = model.generate(**inputs, max_new_tokens=128)
- generated_ids_trimmed = [
-     out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
- ]
- output_text = processor.batch_decode(
-     generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
- )
- print(output_text)
- ```
- </details>
-
- <details>
- <summary>Batch inference</summary>
-
- ```python
- # Sample messages for batch inference
- messages1 = [
-     {
-         "role": "user",
-         "content": [
-             {"type": "image", "image": "file:///path/to/image1.jpg"},
-             {"type": "image", "image": "file:///path/to/image2.jpg"},
-             {"type": "text", "text": "What are the common elements in these pictures?"},
-         ],
-     }
- ]
- messages2 = [
-     {"role": "system", "content": "You are a helpful assistant."},
-     {"role": "user", "content": "Who are you?"},
- ]
- # Combine messages for batch processing
- messages = [messages1, messages2]
-
- # Preparation for batch inference
- texts = [
-     processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
-     for msg in messages
- ]
- image_inputs, video_inputs = process_vision_info(messages)
- inputs = processor(
-     text=texts,
-     images=image_inputs,
-     videos=video_inputs,
-     padding=True,
-     return_tensors="pt",
- )
- inputs = inputs.to("cuda")
-
- # Batch Inference
- generated_ids = model.generate(**inputs, max_new_tokens=128)
- generated_ids_trimmed = [
-     out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
- ]
- output_texts = processor.batch_decode(
-     generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
- )
- print(output_texts)
- ```
- </details>
-
- ### More Usage Tips
-
- For input images, we support local file paths, base64-encoded data, and URLs. For videos, we currently only support local file paths.
-
- ```python
- # You can directly insert a local file path, a URL, or a base64-encoded image at the desired position in the text.
- ## Local file path
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {"type": "image", "image": "file:///path/to/your/image.jpg"},
-             {"type": "text", "text": "Describe this image."},
-         ],
-     }
- ]
- ## Image URL
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {"type": "image", "image": "http://path/to/your/image.jpg"},
-             {"type": "text", "text": "Describe this image."},
-         ],
-     }
- ]
- ## Base64 encoded image
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {"type": "image", "image": "data:image;base64,/9j/..."},
-             {"type": "text", "text": "Describe this image."},
-         ],
-     }
- ]
- ```
- #### Image Resolution for Performance Boost
-
- The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
-
- ```python
- min_pixels = 256 * 28 * 28
- max_pixels = 1280 * 28 * 28
- processor = AutoProcessor.from_pretrained(
-     "Qwen/Qwen2-VL-7B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
- )
- ```
-
- Besides, we provide two methods for fine-grained control over the image size input to the model:
-
- 1. Define min_pixels and max_pixels: Images will be resized to maintain their aspect ratio within the range of min_pixels and max_pixels.
-
- 2. Specify exact dimensions: Directly set `resized_height` and `resized_width`. These values will be rounded to the nearest multiple of 28.
-
- ```python
- # resized_height and resized_width
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {
-                 "type": "image",
-                 "image": "file:///path/to/your/image.jpg",
-                 "resized_height": 280,
-                 "resized_width": 420,
-             },
-             {"type": "text", "text": "Describe this image."},
-         ],
-     }
- ]
- # min_pixels and max_pixels
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {
-                 "type": "image",
-                 "image": "file:///path/to/your/image.jpg",
-                 "min_pixels": 50176,
-                 "max_pixels": 50176,
-             },
-             {"type": "text", "text": "Describe this image."},
-         ],
-     }
- ]
- ```
-
- ## Limitations
-
- - This model starts from Qwen2-VL-7B-Base, so it may share the limitations mentioned at https://huggingface.co/Qwen/Qwen2-VL-7B.
- - This model is trained only with the streaming frame-words paradigm, so it may only be capable of real-time video commentary.
- - The training ASR data comes from YouTube CC, which is known to be of low quality, so the model's output formatting is imperfect (e.g., it may not produce punctuation).
-
- These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
407
-
408
- ## Citation
409
-
410
- If you find our work helpful, feel free to give us a cite.
411
-
412
- ```
413
- @inproceedings{livecc,
414
- author = {Joya Chen and Ziyun Zeng and Yiqi Lin and Wei Li and Zejun Ma and Mike Zheng Shou},
415
- title = {LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale},
416
- booktitle = {CVPR},
417
- year = {2025},
418
- }
419
- ```
420
-
- ## Acknowledgement
-
- [Joya Chen](https://chenjoya.github.io/) built the training code and trained the model. The QA evaluation was done by [Joya Chen](https://chenjoya.github.io/), and the CC evaluation was done by Ziyun Zeng. Infrastructure was supported by the company.
 
+ README is on the way...