Runtime error

Exit code: 1. Reason:
Downloading shards: 100%|██████████| 4/4 [00:41<00:00, 10.34s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 19, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name,
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4097, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/user/.cache/huggingface/modules/transformers_modules/AIDC-AI/Ovis2-8B/d0e09dbe6ce98dc788491976d3c69a539012d44f/modeling_ovis.py", line 293, in __init__
    version.parse(importlib.metadata.version("flash_attn")) >= version.parse("2.6.3")), \
AssertionError: Using `flash_attention_2` requires having `flash_attn>=2.6.3` installed.
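
For context, the model shards download fine; the failure comes from Ovis2-8B's custom modeling_ovis.py, which asserts that flash_attn >= 2.6.3 is installed before it will enable flash_attention_2. Below is a minimal sketch of a pre-flight check for app.py that mirrors that assertion so the Space fails with a clear message before loading the model; the SystemExit messages and the requirements.txt suggestion are illustrative assumptions, not something the model's code prescribes.

    import importlib.metadata
    from packaging import version

    # Reproduce the check from modeling_ovis.py line 293: flash_attn must be
    # installed and at least version 2.6.3 for flash_attention_2 to be used.
    try:
        installed = version.parse(importlib.metadata.version("flash_attn"))
    except importlib.metadata.PackageNotFoundError:
        raise SystemExit(
            "flash_attn is not installed; add flash-attn>=2.6.3 to the "
            "environment (e.g. requirements.txt) before loading Ovis2-8B."
        )
    if installed < version.parse("2.6.3"):
        raise SystemExit(
            f"flash_attn {installed} is too old; Ovis2-8B requires >= 2.6.3."
        )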
