runtime error

Exit code: 1. Reason:

/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py:943: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
tokenizer_config.json: 100%|██████████| 1.46k/1.46k [00:00<00:00, 5.45MB/s]
tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 64.3MB/s]
tokenizer.json: 100%|██████████| 1.80M/1.80M [00:00<00:00, 36.6MB/s]
special_tokens_map.json: 100%|██████████| 72.0/72.0 [00:00<00:00, 219kB/s]
config.json: 100%|██████████| 962/962 [00:00<00:00, 4.43MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3160, in from_pretrained
    hf_quantizer = AutoHfQuantizer.from_config(config.quantization_config, pre_quantized=pre_quantized)
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 124, in from_config
    return target_cls(quantization_config, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_gptq.py", line 47, in __init__
    from optimum.gptq import GPTQQuantizer
ModuleNotFoundError: No module named 'optimum'
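What the traceback shows: the model's config.json carries a GPTQ `quantization_config`, so `from_pretrained` hands off to transformers' GPTQ quantizer, which tries to import `GPTQQuantizer` from the `optimum` package, and `optimum` is not installed in the container. The usual fix is to add `optimum` (and `auto-gptq`, which optimum's GPTQ backend relies on) to the Space's `requirements.txt` and restart. A minimal preflight check along those lines, as a sketch (the helper name and the exact dependency list are assumptions, not part of the original app):

```python
import importlib.util

# Packages that transformers' GPTQ quantizer imports lazily at load time.
# Import names shown here; the corresponding pip names are
# "optimum" and "auto-gptq" (assumed dependency list).
REQUIRED = ("optimum", "auto_gptq")

def missing_gptq_deps() -> list[str]:
    """Return the required GPTQ packages that are not importable."""
    return [name for name in REQUIRED if importlib.util.find_spec(name) is None]

missing = missing_gptq_deps()
if missing:
    print("Add to requirements.txt:", ", ".join(missing))
```

Running a check like this at the top of app.py surfaces the missing dependency as a readable message instead of a mid-load `ModuleNotFoundError`.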
