Unsupported: hasattr SymNodeVariable exception
I am getting the following exception when calling `generate`:
```
/usr/local/lib/python3.11/dist-packages/torch/_dynamo/exc.py in unimplemented(msg, from_exc, case_name)
    295     if from_exc is not _NOTHING:
    296         raise Unsupported(msg, case_name=case_name) from from_exc
--> 297     raise Unsupported(msg, case_name=case_name)
    298
    299

Unsupported: hasattr SymNodeVariable to

from user code:
   File "/usr/local/lib/python3.11/dist-packages/accelerate/hooks.py", line 171, in new_forward
     args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs)
   File "/usr/local/lib/python3.11/dist-packages/accelerate/hooks.py", line 370, in pre_forward
     return send_to_device(args, self.execution_device), send_to_device(
   File "/usr/local/lib/python3.11/dist-packages/accelerate/utils/operations.py", line 183, in send_to_device
     {
   File "/usr/local/lib/python3.11/dist-packages/accelerate/utils/operations.py", line 184, in <dictcomp>
     k: t if k in skip_keys else send_to_device(t, device, non_blocking=non_blocking, skip_keys=skip_keys)
   File "/usr/local/lib/python3.11/dist-packages/accelerate/utils/operations.py", line 148, in send_to_device
     if is_torch_tensor(tensor) or hasattr(tensor, "to"):

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
```
Could you please suggest what might be causing this?
Hi @tatyanavidrevich ,
The `Unsupported: hasattr SymNodeVariable to` error means TorchDynamo hit a `hasattr` check it could not trace — here, the `hasattr(tensor, "to")` duck-typing inside accelerate's `send_to_device` — while a symbolic (SymNode) value was flowing through the pre-forward hook. This often comes from a version mismatch between torch and accelerate, so try upgrading both packages and running again. As a stopgap, setting `torch._dynamo.config.suppress_errors = True` makes Dynamo fall back to eager execution instead of raising. If the issue persists, please share your code and hardware specs.
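To see where the traceback leads, here is a stripped-down, pure-Python mimic of the duck-typed device check in accelerate's `send_to_device` (illustrative only — `FakeTensor` and this simplified recursion are assumptions, not the real torch/accelerate internals). Objects exposing a `.to` method get moved; plain values fall through. Under `torch.compile`, it is the `hasattr(obj, "to")` probe on a traced symbolic value that Dynamo reports as unsupported:

```python
class FakeTensor:
    """Illustrative stand-in for torch.Tensor: just carries a device string."""
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        # Mimics Tensor.to: returns a copy on the target device.
        return FakeTensor(device)


def send_to_device(obj, device):
    # accelerate duck-types here: anything with a .to attribute is moved.
    # When torch.compile traces this and obj is a symbolic int (SymNode),
    # the hasattr check below is what Dynamo flags as Unsupported.
    if hasattr(obj, "to"):
        return obj.to(device)
    if isinstance(obj, dict):
        return {k: send_to_device(v, device) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(send_to_device(v, device) for v in obj)
    return obj  # plain ints, symbolic lengths, etc. pass through unchanged


moved = send_to_device({"input_ids": FakeTensor(), "cache_len": 7}, "cuda:0")
```

In eager mode this pattern is harmless; it only becomes a problem once the hook runs inside a compiled region.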
I tested the code on an NVIDIA A100 (single GPU) with 8-bit precision, and it ran successfully without any errors. Please refer to this gist.
Thank you.