runtime error
Exit code: 1. Reason: d with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.

tokenizer_config.json: 100%|██████████| 5.70k/5.70k [00:00<00:00, 38.1MB/s]
vocab.json: 100%|██████████| 2.78M/2.78M [00:00<00:00, 42.5MB/s]
merges.txt: 100%|██████████| 1.67M/1.67M [00:00<00:00, 21.1MB/s]
tokenizer.json: 100%|██████████| 7.03M/7.03M [00:00<00:00, 66.8MB/s]

You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.

chat_template.json: 100%|██████████| 1.05k/1.05k [00:00<00:00, 10.6MB/s]

Model loaded successfully! Launching demo...

/home/user/app/app.py:323: UserWarning: You have not specified a value for the `type` parameter. Defaulting to the 'tuples' format for chatbot messages, but this is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style dictionaries with 'role' and 'content' keys.
  chatbot = gr.Chatbot(label='Qwen2.5-VL', elem_classes='control-height', height=500)

Error in main: Blocks.launch() got an unexpected keyword argument 'enable_queue'
Traceback (most recent call last):
  File "/home/user/app/app.py", line 397, in <module>
    main()
  File "/home/user/app/app.py", line 382, in main
    demo.queue(max_size=20).launch(
TypeError: Blocks.launch() got an unexpected keyword argument 'enable_queue'
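The fatal TypeError comes from a Gradio API change: the `enable_queue` keyword was removed from `Blocks.launch()` in Gradio 4.x, where queuing is enabled by calling `.queue()` on the app instead. A minimal sketch of the likely fix for `app.py`, assuming Gradio 4+ (component arguments are taken from the log above; the real app builds more UI than shown). Passing `type='messages'` to the chatbot also silences the deprecation warning in the log:

```python
import gradio as gr

with gr.Blocks() as demo:
    # type='messages' uses openai-style {'role': ..., 'content': ...} dicts,
    # replacing the deprecated 'tuples' format warned about in the log
    chatbot = gr.Chatbot(label='Qwen2.5-VL', elem_classes='control-height',
                         height=500, type='messages')

# Drop enable_queue from launch(); .queue(max_size=20) already enables queuing
demo.queue(max_size=20).launch()
```

This is a sketch of the migration, not the app's actual code: the `main()` wrapper and any other `launch()` arguments in the real `app.py` would need the same treatment (remove `enable_queue`, keep everything else).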