Runtime error

Exit code: 1. Reason:

Downloading shards: 100%|██████████| 4/4 [00:32<00:00, 8.22s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 8, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4105, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1525, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1659, in _check_and_enable_flash_attn_2
    raise ImportError(f"{preface} the package flash_attn seems to be not installed. {install_message}")
ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: the package flash_attn seems to be not installed. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2.
