Cannot use the model with vllm
#23 opened by NahuelCosta
Using the command provided here:
```shell
# Load and run the model:
vllm serve "liuhaotian/llava-v1.5-7b"
```
I get the following error (for both the 7b and 13b models):
"LlavaLlamaForCausalLM has no vLLM implementation and the Transformers implementation is not compatible with vLLM."
Is there any way to fix it or get it working?
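For reference, vLLM can list the architecture names it implements, which should show whether LlavaLlamaForCausalLM (or any LLaVA variant) is present. A rough sketch, assuming `ModelRegistry` and `get_supported_archs` are available in the installed vLLM version:

```python
from vllm import ModelRegistry

# Print the architecture names this vLLM install implements; if
# "LlavaLlamaForCausalLM" is not in the list, the checkpoint cannot be
# loaded as-is. (This helper may depend on the vLLM version.)
print(sorted(ModelRegistry.get_supported_archs()))
```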
Thanks in advance.