multi-gpu support?
#2
by bdambrosio - opened
I'd post this on the AutoGPTQ mixtral-fix repo, but there isn't an 'Issues' tab.
@TheBloke - has anyone gotten multi-GPU to work? Single GPU works fine.
With 2x4090 I get the error below (same result with device_map='auto' or with no device_map argument at all):
...
    return self._call_impl(*args, **kwargs)
  File "/home/bruce/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/bruce/.local/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py", line 726, in forward
    idx, top_x = torch.where(expert_mask[expert_idx])
RuntimeError: CUDA error: device-side assert triggered
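For context, a minimal sketch of the load path that triggers this; the model repo ID, prompt, and generation call here are assumptions for illustration, not my exact script:

```python
# Minimal sketch (assumed model repo ID and prompt, not the exact script).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mixtral-8x7B-Instruct-v0.1-GPTQ"  # assumed GPTQ repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # shards across both 4090s; omitting device_map gives the same error
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
# The device-side assert fires in the Mixtral MoE routing during this forward pass.
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```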