gr00t Model - phospho Training Pipeline
Error Traceback
We ran into an issue while training your model.
Traceback (most recent call last):
File "/root/src/helper.py", line 165, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1148, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 998, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 510, in forward
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 278, in apply_rotary_pos_emb
k_embed = (k * cos) + (rotate_half(k) * sin)
~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 150.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 146.75 MiB is free. Process 16 has 79.10 GiB memory in use. Of the allocated memory 77.86 GiB is allocated by PyTorch, and 764.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%| | 0/750 [00:06<?, ?it/s]
The current batch size is too large for the GPU.
Please consider lowering it so training fits in memory.
We train on an 80 GB A100 GPU.
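Since the run failed with `torch.OutOfMemoryError`, one common recovery strategy is to retry with a geometrically reduced batch size until training fits. The sketch below is illustrative only: `run_training` is a hypothetical stand-in for the actual trainer call (which in a real retry loop would raise `torch.OutOfMemoryError` rather than `MemoryError`), and the simulated memory limit is not a measured one.

```python
def run_training(batch_size: int) -> None:
    """Hypothetical stand-in for the phospho trainer invocation.

    Mimics an OOM for batch sizes that do not fit. A batch size of 107
    overflowed the 80 GB A100 in the log above; the limit of 64 used here
    is purely illustrative, not measured.
    """
    if batch_size > 64:
        raise MemoryError("CUDA out of memory (simulated)")


def train_with_backoff(batch_size: int, min_batch_size: int = 1) -> int:
    """Halve the batch size on OOM until training fits or the floor is hit.

    Returns the batch size that succeeded. In real code, catch
    torch.OutOfMemoryError and free cached memory between attempts.
    """
    while batch_size >= min_batch_size:
        try:
            run_training(batch_size)
            return batch_size
        except MemoryError:
            batch_size //= 2  # retry with half the batch
    raise RuntimeError("Even the minimum batch size does not fit on the GPU")


print(train_with_backoff(107))  # 107 OOMs, 53 fits under the simulated limit
```

Halving converges quickly (a handful of retries at most), at the cost of possibly landing below the largest batch that would fit; a follow-up binary search between the last failing and first fitting sizes can recover some of that headroom.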
Training parameters:
- Dataset: tatung/hybrid_gripper_paper_to_blackbox
- Wandb run URL: None
- Epochs: 10
- Batch size: 107
- Training steps: None
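The allocator hint in the traceback ("reserved but unallocated" memory of ~764 MiB suggests fragmentation) can be applied before relaunching the run. A minimal sketch; the retrain command itself is elided, since this log does not show the pipeline's entry point:

```shell
# Suggested by the OOM message itself: allow expandable segments to reduce
# fragmentation of memory that PyTorch has reserved but not allocated.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Note this only mitigates fragmentation; with 77.86 GiB genuinely allocated out of 79.25 GiB, lowering the batch size remains the primary fix.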
Get Started: docs.phospho.ai
Get your robot: robots.phospho.ai