vLLM serve error :: AttributeError: 'HCXVisionConfig' object has no attribute 'num_attention_heads'
ERROR 05-07 14:12:09 [core.py:396] EngineCore failed to start.
ERROR 05-07 14:12:09 [core.py:396] Traceback (most recent call last):
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 387, in run_engine_core
ERROR 05-07 14:12:09 [core.py:396] engine_core = EngineCoreProc(*args, **kwargs)
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 329, in __init__
ERROR 05-07 14:12:09 [core.py:396] super().__init__(vllm_config, executor_class, log_stats,
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/v1/engine/core.py", line 64, in __init__
ERROR 05-07 14:12:09 [core.py:396] self.model_executor = executor_class(vllm_config)
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 52, in __init__
ERROR 05-07 14:12:09 [core.py:396] self._init_executor()
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 46, in _init_executor
ERROR 05-07 14:12:09 [core.py:396] self.collective_rpc("init_device")
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
ERROR 05-07 14:12:09 [core.py:396] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/utils.py", line 2456, in run_method
ERROR 05-07 14:12:09 [core.py:396] return func(*args, **kwargs)
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 604, in init_device
ERROR 05-07 14:12:09 [core.py:396] self.worker.init_device() # type: ignore
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/v1/worker/gpu_worker.py", line 142, in init_device
ERROR 05-07 14:12:09 [core.py:396] self.model_runner: GPUModelRunner = GPUModelRunner(
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/v1/worker/gpu_model_runner.py", line 118, in __init__
ERROR 05-07 14:12:09 [core.py:396] self.num_kv_heads = model_config.get_num_kv_heads(parallel_config)
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/config.py", line 1081, in get_num_kv_heads
ERROR 05-07 14:12:09 [core.py:396] total_num_kv_heads = self.get_total_num_kv_heads()
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/vllm/config.py", line 1073, in get_total_num_kv_heads
ERROR 05-07 14:12:09 [core.py:396] return self.hf_text_config.num_attention_heads
ERROR 05-07 14:12:09 [core.py:396] File "/home/horang/venv/llama/lib/python3.10/site-packages/transformers/configuration_utils.py", line 210, in __getattribute__
ERROR 05-07 14:12:09 [core.py:396] return super().__getattribute__(key)
ERROR 05-07 14:12:09 [core.py:396] AttributeError: 'HCXVisionConfig' object has no attribute 'num_attention_heads'
Environment: Docker container
transformers version: 4.51.3
vllm version: 0.8.5.post1
huggingface-hub version: 0.30.2
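For reference, the failing call is vLLM reading num_attention_heads straight off the top-level config object (vllm/config.py, get_total_num_kv_heads). Below is a minimal sketch of that lookup, assuming a placeholder repo id and assuming the custom HCXVisionConfig keeps its language-model hyperparameters on a nested text_config rather than at the top level (neither is confirmed in this thread):

```python
# Sketch only: reproduce the attribute lookup vLLM performs at engine startup.
# The repo id below is a placeholder; substitute the actual model path.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "path/or/repo-id-of-the-hcx-vision-model",  # placeholder
    trust_remote_code=True,
)

# vLLM's ModelConfig.get_total_num_kv_heads() reads this attribute directly,
# which raises the AttributeError shown in the traceback above:
print(hasattr(config, "num_attention_heads"))

# If the value lives on a nested text config instead (an assumption for this
# model), it would show up here:
text_config = getattr(config, "text_config", None)
if text_config is not None:
    print(getattr(text_config, "num_attention_heads", None))
```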
Hi, thank you for your interest in our model.
We are currently working on supporting vLLM (including the VLM module), and we expect to have support ready by the end of June.
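Once a vLLM release includes the integration, you can verify that your installed build registers the architecture before retrying the serve command. A small check, assuming the architecture name contains "HCX" (take the exact string from the "architectures" field of the model's config.json):

```python
# Sketch: list the architectures the installed vLLM build supports and look
# for the HCX vision model. The "HCX" substring is an assumption; use the
# exact name from the model's config.json.
from vllm import ModelRegistry

archs = ModelRegistry.get_supported_archs()
print([a for a in archs if "HCX" in a])
```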