runtime error
Exit code: 1. Reason:
pytorch_model-00001-of-00002.bin: 100%|██████████| 4.76G/4.76G [00:13<00:00, 344MB/s]
pytorch_model-00002-of-00002.bin: 100%|██████████| 1.05G/1.05G [00:03<00:00, 294MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 9, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 311, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4839, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5260, in _load_pretrained_model
    caching_allocator_warmup(model_to_load, expanded_device_map, hf_quantizer)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 5860, in caching_allocator_warmup
    index = device.index if device.index is not None else torch.cuda.current_device()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 1026, in current_device
    _lazy_init()
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 372, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
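The failure happens before any inference runs: once the checkpoint shards finish downloading, transformers' caching_allocator_warmup calls torch.cuda.current_device(), and CUDA initialization fails because the container has no NVIDIA driver, i.e. the Space is running on CPU-only hardware while the load path in app.py expects a GPU. A minimal sketch of a CPU-safe load for app.py, assuming a CPU-only Space; the model id below is a hypothetical placeholder, not taken from these logs:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "org/model-name"  # hypothetical placeholder, not from the original app.py

# Pick the device explicitly so from_pretrained never tries to
# initialize CUDA on a machine without an NVIDIA driver.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    low_cpu_mem_usage=True,  # load weights incrementally to reduce peak RAM
)
model.to(device)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

The other option is to assign GPU hardware to the Space in its settings so a driver is present; the code change above only matters if the app is meant to run on CPU.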