ACT Model - phospho Training Pipeline

Error Traceback

We faced an issue while training your model.

Training process failed with exit code 1 (traceback truncated at the top):

```
    return forward_call(*args, **kwargs)
  File "/lerobot/lerobot/common/policies/act/modeling_act.py", line 583, in forward
    x = self.linear2(self.dropout(self.activation(self.linear1(x))))
  File "/opt/conda/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/conda/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 736.00 MiB. GPU 0 has a total capacity of 22.07 GiB of which 626.25 MiB is free. Process 17 has 21.45 GiB memory in use. Of the allocated memory 20.55 GiB is allocated by PyTorch, and 640.91 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
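The usual mitigations for this error are the allocator setting the message itself suggests, plus shrinking the per-step batch while keeping the effective batch size via gradient accumulation. A minimal sketch follows; the batch-size values are illustrative placeholders, not phospho's actual training defaults:

```python
import os

# 1. Reduce allocator fragmentation, as the OOM message suggests.
#    This must be set before the first CUDA allocation (i.e. before
#    any tensor is moved to the GPU).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# 2. Halve the per-step batch size and compensate with gradient
#    accumulation so the effective batch size is unchanged.
#    (16 is a hypothetical original value.)
batch_size = 8
grad_accum_steps = 2
effective_batch = batch_size * grad_accum_steps  # still 16 samples per optimizer step
```

In a training loop, gradient accumulation means calling `loss.backward()` on each micro-batch and stepping the optimizer only every `grad_accum_steps` iterations, which roughly halves peak activation memory here at the cost of extra forward/backward passes per update.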

Training parameters:

📖 Get Started: docs.phospho.ai

🤖 Get your robot: robots.phospho.ai
