Commit 96f199a (verified) by LegrandFrederic · Parent: cf48e57

Upload README.md with huggingface_hub

Files changed (1): README.md (+62)
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---

# gr00t Model - phospho Training Pipeline

## Error Traceback

We faced an issue while training your model.

```
Traceback (most recent call last):
  File "/root/src/helper.py", line 229, in predict
    trainer.train(timeout_seconds=timeout_seconds)
  File "/root/phosphobot/am/gr00t.py", line 1067, in train
    asyncio.run(
  File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/root/phosphobot/am/gr00t.py", line 967, in run_gr00t_training
    raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/nn/modules/normalization.py", line 217, in forward
    return F.layer_norm(
           ^^^^^^^^^^^^^
  File "/opt/conda/lib/python3.11/site-packages/torch/nn/functional.py", line 2900, in layer_norm
    return torch.layer_norm(
           ^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 288.00 MiB. GPU 0 has a total capacity of 79.25 GiB of which 24.75 MiB is free. Process 548102 has 79.22 GiB memory in use. Of the allocated memory 78.46 GiB is allocated by PyTorch, and 266.39 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
  0%|          | 0/2340 [00:06<?, ?it/s]
```

The current batch size is too large for the GPU. Please lower it so training fits in memory; we train on a single 80 GB A100 GPU.
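
The traceback itself points at two mitigations: the `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` allocator hint, and a smaller per-step batch. Below is a minimal, generic PyTorch sketch of the second option via gradient accumulation; this is not the phosphobot trainer's actual API, and `train_epoch`, `accum_steps`, and the toy MSE loss are illustrative names only.

```python
import os

# Allocator hint suggested by the error message itself; it must be set
# before the first CUDA allocation to reduce fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

def train_epoch(model, loader, optimizer, accum_steps: int = 4):
    """One epoch with gradient accumulation (illustrative sketch).

    A per-step batch of 32 with accum_steps=4 reproduces the effective
    batch size of 128 that ran out of memory, at roughly a quarter of
    the activation memory.
    """
    model.train()
    optimizer.zero_grad()
    for step, (inputs, targets) in enumerate(loader):
        inputs, targets = inputs.cuda(), targets.cuda()
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        # Scale so the accumulated gradient matches the large-batch average.
        (loss / accum_steps).backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```

Gradient accumulation trades wall-clock time for peak memory while keeping the optimizer's effective batch size unchanged, so the run's hyperparameters would not need retuning.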

## Training parameters

- **Dataset**: [PAphospho/tictactoe-A1-orange](https://huggingface.co/datasets/PAphospho/tictactoe-A1-orange)
- **Wandb run URL**: None
- **Epochs**: 15
- **Batch size**: 128 (see the sizing sketch after this list)
- **Training steps**: None
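
As a rough sizing check using the figures reported in the traceback, under the (conservative) assumption that memory grows roughly linearly with batch size; the 20% headroom factor below is an assumption, not a measured value:

```python
# Figures from the traceback: batch size 128 used ~79.22 GiB
# of a 79.25 GiB A100 before failing.
failed_batch = 128
used_gib = 79.22
capacity_gib = 79.25

gib_per_sample = used_gib / failed_batch        # ~0.62 GiB/sample (upper bound:
                                                # includes fixed model weights)
headroom = 0.8                                  # assumed 20% slack for spikes
safe_batch = int(capacity_gib * headroom / gib_per_sample)
print(safe_batch)                               # ~102, so retrying at 64 or 32
                                                # leaves a comfortable margin
```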

📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)

🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)