Runtime error

Exit code: 1. Reason:

model.safetensors: 100%|██████████| 2.47G/2.47G [00:06<00:00, 402MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 17, in <module>
    aura_sr = AuraSR.from_pretrained("fal/AuraSR-v2")
  File "/usr/local/lib/python3.10/site-packages/aura_sr.py", line 810, in from_pretrained
    model = cls(config)
  File "/usr/local/lib/python3.10/site-packages/aura_sr.py", line 773, in __init__
    self.upsampler = UnetUpsampler(**config).to(device)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1152, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 802, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 825, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1150, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 302, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
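The crash happens because the library moves the model to a CUDA device on a machine with no NVIDIA driver (a CPU-only container): the `.to(device)` call in `aura_sr.py` triggers `torch._C._cuda_init()`, which fails. A minimal sketch of the general guard pattern (the `nn.Linear` model here is a stand-in, not the AuraSR API, which hardcodes its device internally) is to pick the device conditionally so CPU-only hosts fall back cleanly:

```python
import torch
import torch.nn as nn

# Select CUDA only when a working driver is actually present;
# otherwise fall back to CPU so .to(device) never calls _lazy_init().
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in module: moving any nn.Module this way is safe on GPU-less machines.
model = nn.Linear(4, 4).to(device)

print(device.type)
```

In a Hugging Face Space, the alternative is to assign GPU hardware to the Space so `torch.cuda.is_available()` returns True in the container.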
