Runtime error
Exit code: 1. Reason:
ava_llama3_fp8_scaled.safetensors
boreal-hl-v1.safetensors:   0%|          | 0.00/181M [00:00<?, ?B/s]
boreal-hl-v1.safetensors: 100%|█████████▉| 181M/181M [00:00<00:00, 193MB/s]
Model moved to: models/loras/boreal-hl-v1.safetensors
INFO:root:
Prestartup times for custom nodes:
INFO:root:   0.0 seconds: /home/user/app/custom_nodes/rgthree-comfy
INFO:root:   0.0 seconds: /home/user/app/custom_nodes/ComfyUI-Easy-Use
INFO:root:
INFO:root:Checkpoint files will always be loaded safely.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 205, in <module>
    add_extra_model_paths()
  File "/home/user/app/app.py", line 189, in add_extra_model_paths
    from main import load_extra_path_config
  File "/home/user/app/main.py", line 136, in <module>
    import execution
  File "/home/user/app/execution.py", line 13, in <module>
    import nodes
  File "/home/user/app/nodes.py", line 22, in <module>
    import comfy.diffusers_load
  File "/home/user/app/comfy/diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "/home/user/app/comfy/sd.py", line 6, in <module>
    from comfy import model_management
  File "/home/user/app/comfy/model_management.py", line 166, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/home/user/app/comfy/model_management.py", line 129, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 971, in current_device
    _lazy_init()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
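The traceback shows PyTorch's lazy CUDA initialization failing: ComfyUI's get_torch_device() calls torch.cuda.current_device() unconditionally, which triggers torch._C._cuda_init() and raises this RuntimeError on a host with no NVIDIA GPU or driver (e.g. a CPU-only Space). A minimal sketch of the kind of guard that avoids the hard failure is below; get_device is a hypothetical helper for illustration, not ComfyUI's actual code, and it assumes falling back to CPU is acceptable.

import torch

# Hypothetical device-selection guard (assumption, not ComfyUI's code):
# check whether CUDA is usable before asking for the current CUDA device,
# so CPU-only hosts fall back instead of raising RuntimeError at import time.
def get_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device(torch.cuda.current_device())
    return torch.device("cpu")

print(get_device())  # prints device(type='cpu') on a host without an NVIDIA driver

In practice, the usual fixes are either to run the Space on GPU hardware or to start ComfyUI in CPU mode (it exposes a --cpu launch flag for this, if the installed version supports it).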