Runtime error
Exit code: 1. Reason:
model-00004-of-00004.safetensors: 100%|█████████▉| 2.16G/2.16G [00:06<00:00, 324MB/s]
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 180, in <module>
    model = LlavaForConditionalGeneration.from_pretrained(MODEL_PATH, torch_dtype="bfloat16", device_map=0)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 279, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4400, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4793, in _load_pretrained_model
    caching_allocator_warmup(model_to_load, expanded_device_map, factor=2 if hf_quantizer is None else 4)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5795, in caching_allocator_warmup
    device_memory = torch.cuda.mem_get_info(index)[0]
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/memory.py", line 836, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 449, in cudart
    _lazy_init()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 372, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
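
The checkpoint download itself finishes; the crash happens when from_pretrained is called with device_map=0, which tells transformers to place the model on CUDA device 0 and so forces torch.cuda to initialize on a container that has no NVIDIA driver or GPU. Below is a minimal sketch of one possible guard in app.py, assuming the Space should fall back to CPU when no GPU is present; the MODEL_PATH value is a hypothetical placeholder, since the real one is not shown in the log.

import torch
from transformers import LlavaForConditionalGeneration

# Hypothetical placeholder: the Space's actual MODEL_PATH is defined earlier
# in app.py and is not visible in this log.
MODEL_PATH = "llava-hf/llava-1.5-7b-hf"

if torch.cuda.is_available():
    # GPU hardware with a working NVIDIA driver: keep the original placement
    # from the traceback (bfloat16 on CUDA device 0).
    model = LlavaForConditionalGeneration.from_pretrained(
        MODEL_PATH, torch_dtype=torch.bfloat16, device_map=0
    )
else:
    # No NVIDIA driver (e.g. CPU-only Space hardware): load on CPU in float32
    # and never touch torch.cuda, which is what raised the RuntimeError above.
    model = LlavaForConditionalGeneration.from_pretrained(
        MODEL_PATH, torch_dtype=torch.float32
    )

The other way to resolve this, if CPU inference is too slow for this model, is to run the app on hardware that actually has an NVIDIA GPU (for a Hugging Face Space, by assigning GPU hardware in the Space settings) so that torch.cuda.is_available() is true at startup.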