vLLM error
#2 opened by EmilPi
I tried to run
vllm serve ~/models3/fp16/ChatDOC/OCRFlux-3B --gpu-memory-utilization 0.8 --max-model-len 8192
and got
File "/home/ai/3rdparty/vllm_dir/.venv/lib/python3.12/site-packages/vllm/transformers_utils/processor.py", line 72, in get_processor
processor = processor_factory.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ai/3rdparty/vllm_dir/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 1304, in from_pretrained
return cls.from_args_and_dict(args, processor_dict, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ai/3rdparty/vllm_dir/.venv/lib/python3.12/site-packages/transformers/processing_utils.py", line 1105, in from_args_and_dict
processor = cls(*args, **valid_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Qwen2_5_VLProcessor.__init__() got multiple values for argument 'image_processor'
Please use vllm==0.7.3 and try again.
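For reference, pinning that version and retrying might look like the following (a sketch assuming a pip-managed environment; the model path is the one from the original command):

```shell
# Pin vLLM to the suggested version, then retry the original command.
pip install vllm==0.7.3
vllm serve ~/models3/fp16/ChatDOC/OCRFlux-3B --gpu-memory-utilization 0.8 --max-model-len 8192
```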
Can this model be used with vLLM's OpenAI-compatible API? If yes, can you provide a curl example of a request to the model?
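A minimal sketch of such a request, assuming the `vllm serve` command above is running with its defaults (OpenAI-compatible server on port 8000, served model name equal to the path passed to `vllm serve`); the image URL is a placeholder:

```shell
# Hedged example: host, port, and model name assume vLLM's defaults.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "~/models3/fp16/ChatDOC/OCRFlux-3B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "image_url",
           "image_url": {"url": "https://example.com/page.png"}},
          {"type": "text", "text": "Convert this page to markdown."}
        ]
      }
    ],
    "max_tokens": 4096
  }'
```

If `--served-model-name` was passed to `vllm serve`, use that name in the `model` field instead; `GET http://localhost:8000/v1/models` lists the names the server actually accepts.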