Runtime error
Exit code: 1. Reason:
Downloading shards:  67%|██████▋   | 4/6 [00:09<00:04, 2.22s/it]
Downloading shards:  83%|████████▎ | 5/6 [00:11<00:02, 2.18s/it]
Downloading shards: 100%|██████████| 6/6 [00:13<00:00, 2.16s/it]
Downloading shards: 100%|██████████| 6/6 [00:13<00:00, 2.25s/it]
Loading checkpoint shards:   0%|          | 0/6 [00:00<?, ?it/s]
Loading checkpoint shards:  33%|███▎      | 2/6 [00:01<00:02, 1.39it/s]
Loading checkpoint shards:  67%|██████▋   | 4/6 [00:02<00:01, 1.56it/s]
Loading checkpoint shards: 100%|██████████| 6/6 [00:03<00:00, 1.72it/s]
Loading checkpoint shards: 100%|██████████| 6/6 [00:03<00:00, 1.65it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    model = PaliGemmaForConditionalGeneration.from_pretrained("gokaygokay/sd3-long-captioner").to("cuda").eval()
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2901, in to
    return super().to(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1174, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 780, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 805, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in convert
    return t.to(
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 314, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
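The model loads fine; the crash happens because app.py hard-codes `.to("cuda")` on a machine where no GPU is visible (e.g. a CPU-only Space). A minimal sketch of a device fallback, assuming the same model id from the traceback and that `torch` and `transformers` are installed (the `pick_device` helper is a hypothetical name, not part of either library):

```python
import torch


def pick_device() -> str:
    """Return "cuda" only when a GPU is actually usable, else "cpu"."""
    return "cuda" if torch.cuda.is_available() else "cpu"


if __name__ == "__main__":
    # Import here so the helper above stays importable without transformers.
    from transformers import PaliGemmaForConditionalGeneration

    device = pick_device()
    # Loading on CPU will be slow for a sharded multi-GB checkpoint,
    # but it avoids the "No CUDA GPUs are available" crash.
    model = (
        PaliGemmaForConditionalGeneration
        .from_pretrained("gokaygokay/sd3-long-captioner")
        .to(device)
        .eval()
    )
```

If the app is meant to run on GPU, the alternative fix is assigning GPU hardware to the Space rather than falling back to CPU.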