---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---

# gr00t Model - phospho Training Pipeline


## Error Traceback

We encountered an issue while training your model.

```
Traceback (most recent call last):
  File "/root/src/helper.py", line 229, in predict
    trainer.train(timeout_seconds=timeout_seconds)
  File "/root/phosphobot/am/gr00t.py", line 1067, in train
    asyncio.run(
  File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/root/phosphobot/am/gr00t.py", line 967, in run_gr00t_training
    raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/normalization.py", line 217, in forward
    return F.layer_norm(
           ^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/nn/functional.py", line 2900, in layer_norm
    return torch.layer_norm(
           ^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 24.75 MiB is free. Process 64 has 79.22 GiB memory in use. Of the allocated memory 78.46 GiB is allocated by PyTorch, and 266.39 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%|          | 0/1560 [00:09<?, ?it/s]


```

The current batch size is too large for the GPU; please consider lowering it so training fits in memory. We train on an 80 GB A100 GPU.
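As a rough, hypothetical sketch of the suggested fixes (not the phospho training pipeline's actual API): set `PYTORCH_CUDA_ALLOC_CONF` before CUDA is initialized, and trade per-device batch size for gradient accumulation so the effective batch size stays at 128. The variable names below are illustrative only.

```python
import os

# Reduce allocator fragmentation, as the error message suggests.
# This must be set before torch initializes CUDA.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # imported after setting the env var on purpose

print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no GPU")

# Keep the effective batch size at 128 while lowering the per-device batch
# size that actually has to fit on the GPU at once.
EFFECTIVE_BATCH_SIZE = 128
per_device_batch_size = 32          # hypothetical value; tune to your GPU
grad_accum_steps = EFFECTIVE_BATCH_SIZE // per_device_batch_size  # -> 4

assert per_device_batch_size * grad_accum_steps == EFFECTIVE_BATCH_SIZE
print(f"per-device batch: {per_device_batch_size}, "
      f"grad accumulation: {grad_accum_steps}")
```

Whether 32 (or 64) actually fits depends on the GR00T model's memory footprint for this dataset, so treat these numbers as starting points rather than guarantees.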


## Training parameters

- **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 128
- **Training steps**: None (see the step-count sketch below)
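
For context on the `0/1560` total shown in the progress bar: when **Training steps** is `None`, the step count typically follows from the epoch count, dataset size, and batch size. The sketch below is back-of-the-envelope arithmetic under the assumption of one optimizer step per batch and no gradient accumulation; the inferred frame count is an estimate, not a figure from the dataset card.

```python
import math

epochs = 10
batch_size = 128
total_steps = 1560                        # from the progress bar in the traceback above

steps_per_epoch = total_steps // epochs           # 156
approx_frames = steps_per_epoch * batch_size      # ~19,968 frames (rough estimate)

# Lowering the batch size to dodge the OOM raises the step count proportionally
# (each step is cheaper, but there are more of them):
new_batch_size = 32
new_total_steps = epochs * math.ceil(approx_frames / new_batch_size)
print(steps_per_epoch, approx_frames, new_total_steps)   # 156 19968 6240
```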

📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)

🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)