Runtime error

Exit code: 1. Reason:

==========
== CUDA ==
==========

CUDA Version 12.1.1

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available.
   Use the NVIDIA Container Toolkit to start this container with GPU support; see
   https://docs.nvidia.com/datacenter/cloud-native/ .

font_manager.py:1584 2024-08-18 03:47:12,431 generated new fontManager
/content/Lumina-T2X/lumina_next_compositional_generation/models/components.py:9: UserWarning: Cannot import apex RMSNorm, switch to vanilla implementation
  warnings.warn("Cannot import apex RMSNorm, switch to vanilla implementation")
Traceback (most recent call last):
  File "/content/Lumina-T2X/lumina_next_compositional_generation/worker_runpod.py", line 41, in <module>
    dist.init_process_group("nccl")
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 86, in wrapper
    func_return = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1184, in init_process_group
    default_pg, _ = _new_process_group_helper(
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 1339, in _new_process_group_helper
    backend_class = ProcessGroupNCCL(
ValueError: ProcessGroupNCCL is only supported with GPUs, no GPUs found!
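The traceback shows worker_runpod.py calling dist.init_process_group("nccl") on a host where no NVIDIA driver was detected, and the NCCL backend refuses to initialize without a GPU. The real fix is to start the container with GPU support via the NVIDIA Container Toolkit, as the warning in the log suggests. For reference, a minimal sketch of a defensive guard that falls back to the CPU-only Gloo backend when CUDA is unavailable is shown below; this is not the script's actual code, and the helper names are hypothetical:

```python
import os

import torch
import torch.distributed as dist


def pick_backend() -> str:
    # NCCL requires at least one visible CUDA device; Gloo runs on CPU.
    return "nccl" if torch.cuda.is_available() else "gloo"


def init_single_process_group() -> str:
    """Initialize a one-process group, falling back to Gloo on CPU-only hosts."""
    # Single-process defaults; a real launcher (torchrun) would set these.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    backend = pick_backend()
    dist.init_process_group(backend, rank=0, world_size=1)
    return backend


if __name__ == "__main__":
    backend = init_single_process_group()
    print(f"initialized process group with backend: {backend}")
    dist.destroy_process_group()
```

With a guard like this the script would still start on a CPU-only machine, though inference would run without GPU acceleration; launching the container with the NVIDIA Container Toolkit (so that the driver is visible inside it) is the proper remedy.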
