issue with vLLM inference

#7
by rohitg - opened

I keep running into the following error when trying to run inference with vLLM on PyTorch 2.7 (cu118):

```
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/vllm/compilation/cuda_piecewise_backend.py", line 111, in __call__
ERROR 07-14 16:50:13 [core.py:515]     return self.compiled_graph_for_general_shape(*args)
ERROR 07-14 16:50:13 [core.py:515]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
ERROR 07-14 16:50:13 [core.py:515]     return fn(*args, **kwargs)
ERROR 07-14 16:50:13 [core.py:515]            ^^^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
ERROR 07-14 16:50:13 [core.py:515]     return compiled_fn(full_args)
ERROR 07-14 16:50:13 [core.py:515]            ^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 328, in runtime_wrapper
ERROR 07-14 16:50:13 [core.py:515]     all_outs = call_func_at_runtime_with_args(
ERROR 07-14 16:50:13 [core.py:515]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
ERROR 07-14 16:50:13 [core.py:515]     out = normalize_as_list(f(args))
ERROR 07-14 16:50:13 [core.py:515]                             ^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 689, in inner_fn
ERROR 07-14 16:50:13 [core.py:515]     outs = compiled_fn(args)
ERROR 07-14 16:50:13 [core.py:515]            ^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper
ERROR 07-14 16:50:13 [core.py:515]     return compiled_fn(runtime_args)
ERROR 07-14 16:50:13 [core.py:515]            ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_inductor/output_code.py", line 460, in __call__
ERROR 07-14 16:50:13 [core.py:515]     return self.current_callable(inputs)
ERROR 07-14 16:50:13 [core.py:515]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_inductor/utils.py", line 2404, in run
ERROR 07-14 16:50:13 [core.py:515]     return model(new_inputs)
ERROR 07-14 16:50:13 [core.py:515]            ^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/home/ro308090/.cache/vllm/torch_compile_cache/1316656fd2/rank_0_0/inductor_cache/xw/cxww7bk4vsxx3nbg5o7zwbfp3lgkbpglqtnlg47ymw7kxtw3cklm.py", line 890, in call
ERROR 07-14 16:50:13 [core.py:515]     torch.ops.vllm.inplace_fused_experts.default(reinterpret_tensor(buf17, (s0, 2048), (2048, 1), 0), arg9_1, arg10_1, buf18, buf19, 'silu', False, False, False, False, False, False, 64)
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_ops.py", line 756, in __call__
ERROR 07-14 16:50:13 [core.py:515]     return self._op(*args, **kwargs)
ERROR 07-14 16:50:13 [core.py:515]            ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 1018, in inplace_fused_experts
ERROR 07-14 16:50:13 [core.py:515]     fused_experts_impl(hidden_states, w1, w2, topk_weights, topk_ids, True,
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 1340, in fused_experts_impl
ERROR 07-14 16:50:13 [core.py:515]     moe_align_block_size(curr_topk_ids, config['BLOCK_SIZE_M'],
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/moe_align_block_size.py", line 238, in moe_align_block_size
ERROR 07-14 16:50:13 [core.py:515]     ops.moe_align_block_size(topk_ids, num_experts, block_size, sorted_ids,
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/vllm/_custom_ops.py", line 1519, in moe_align_block_size
ERROR 07-14 16:50:13 [core.py:515]     torch.ops._moe_C.moe_align_block_size(topk_ids, num_experts, block_size,
ERROR 07-14 16:50:13 [core.py:515]     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 07-14 16:50:13 [core.py:515]   File "/lustre/fs1/groups/mshah/reasoning_rohit/reasoning/tlqa/lib/python3.12/site-packages/torch/_ops.py", line 1267, in __getattr__
ERROR 07-14 16:50:13 [core.py:515]     raise AttributeError(
ERROR 07-14 16:50:13 [core.py:515] AttributeError: '_OpNamespace' '_moe_C' object has no attribute 'moe_align_block_size'
```
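The AttributeError says the `torch.ops._moe_C` namespace never registered `moe_align_block_size`, which usually means vLLM's compiled custom-op extension failed to load (for example, a torch/CUDA ABI mismatch between the vLLM wheel and the installed PyTorch). A minimal probe to confirm this, as a generic sketch — only the namespace and op name come from the traceback; the helper itself is not vLLM tooling:

```python
# Generic diagnostic: does torch.ops.<namespace>.<op_name> resolve?
# "_moe_C" / "moe_align_block_size" are taken from the traceback above.
import importlib.util


def op_available(namespace: str, op_name: str) -> bool:
    """Return True if the custom op is registered in this environment."""
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed at all
    import torch
    try:
        # torch.ops creates namespaces lazily; looking up a missing op
        # raises AttributeError (exactly as in the traceback).
        getattr(getattr(torch.ops, namespace), op_name)
        return True
    except AttributeError:
        return False


print(op_available("_moe_C", "moe_align_block_size"))
```

If this prints `False`, the fix is typically reinstalling a vLLM build that matches the installed torch/CUDA pair rather than anything in the model code.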

Moonshot AI org

Can you share a full repro and your environment details? You can also file a bug report in the vLLM repo.
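For the environment report, a minimal sketch of the basics worth including — this is a generic summary helper, not vLLM's own tooling; PyTorch also ships `python -m torch.utils.collect_env` for a fuller report:

```python
# Generic sketch: collect the version info most relevant to this error
# (Python, torch, and the CUDA version torch was built against).
import platform


def env_summary() -> dict:
    info = {"python": platform.python_version()}
    try:
        import torch
        info["torch"] = torch.__version__
        info["cuda"] = torch.version.cuda  # None for CPU-only builds
    except ImportError:
        info["torch"] = "not installed"
    return info


print(env_summary())
```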
