runtime error
Exit code: 1. Reason:
██| 9.09M/9.09M [00:00<00:00, 41.3MB/s]
special_tokens_map.json: 100%|██████████| 73.0/73.0 [00:00<00:00, 568kB/s]
config.json: 100%|██████████| 844/844 [00:00<00:00, 6.85MB/s]
You don't have a GPU available to load the model, the inference will be slow because of weight unpacking
model.safetensors: 100%|██████████| 1.18G/1.18G [00:03<00:00, 333MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 21, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 571, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 282, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4470, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4909, in _load_pretrained_model
    disk_offload_index, cpu_offload_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 735, in _load_state_dict_into_meta_model
    file_pointer = safe_open(shard_file, framework="pt", device=tensor_device)
safetensors_rust.SafetensorError: device disk is invalid
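Reading the traceback bottom-up: the host has no GPU, so the weight-loading path ends up with a tensor device of "disk" (accelerate's disk-offload placement), which safetensors' `safe_open` rejects. A common workaround on CPU-only hosts is to stop asking for automatic device placement and load explicitly on CPU. The sketch below is an assumption about the fix, not verified against this Space: `pick_load_kwargs` is a hypothetical helper, and `"model-id"` is a placeholder since the log does not show which model the app loads.

```python
def pick_load_kwargs(cuda_available: bool) -> dict:
    """Hypothetical helper: choose from_pretrained kwargs that avoid
    disk offload on CPU-only hosts."""
    if cuda_available:
        # With a GPU, letting accelerate place weights is usually fine.
        return {"device_map": "auto", "torch_dtype": "auto"}
    # On CPU-only hosts, device_map="auto" may plan disk offload, and
    # safetensors' safe_open then fails with "device disk is invalid".
    # Loading everything on CPU sidesteps that path (at the cost of RAM).
    return {"device_map": "cpu"}


if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        "model-id",  # placeholder: the Space's model id is not in the log
        **pick_load_kwargs(torch.cuda.is_available()),
    )
```

If the 1.18 GB checkpoint genuinely does not fit in the container's RAM, the alternative is to pass an `offload_folder` to `from_pretrained` so offloaded shards go to an explicit directory, or to move the Space to hardware with more memory.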