runtime error

Exit code: 1. Reason:

.73k/1.73k [00:00<00:00, 12.3MB/s]
configuration_deepseek.py: 100%|██████████| 10.6k/10.6k [00:00<00:00, 56.8MB/s]
A new version of the following files was downloaded from https://huggingface.co/deepseek-ai/DeepSeek-V3:
- configuration_deepseek.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
modeling_deepseek.py: 100%|██████████| 75.8k/75.8k [00:00<00:00, 164MB/s]
A new version of the following files was downloaded from https://huggingface.co/deepseek-ai/DeepSeek-V3:
- modeling_deepseek.py
Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)  # ADD trust_remote_code=True
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3605, in from_pretrained
    config.quantization_config = AutoHfQuantizer.merge_quantization_configs(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 181, in merge_quantization_configs
    quantization_config = AutoQuantizationConfig.from_dict(quantization_config)
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/auto.py", line 105, in from_dict
    raise ValueError(
ValueError: Unknown quantization type, got fp8 - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto', 'eetq', 'higgs', 'hqq', 'compressed-tensors', 'fbgemm_fp8', 'torchao', 'bitnet', 'vptq']
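The ValueError is raised inside transformers' AutoQuantizationConfig.from_dict: DeepSeek-V3's config.json declares a quant_method of "fp8", which this installed transformers version does not recognize, so the model fails before any weights load. A minimal sketch that reproduces the same check against the supported-types list from the traceback (check_quant_method is a hypothetical stand-in for the library's internal validation, not its real API):

```python
# Supported quantization types, copied from the ValueError in the traceback.
SUPPORTED_QUANT_TYPES = [
    "awq", "bitsandbytes_4bit", "bitsandbytes_8bit", "gptq", "aqlm",
    "quanto", "eetq", "higgs", "hqq", "compressed-tensors",
    "fbgemm_fp8", "torchao", "bitnet", "vptq",
]

def check_quant_method(quantization_config: dict) -> str:
    """Hypothetical stand-in for the check in AutoQuantizationConfig.from_dict:
    reject any quant_method not in the supported list."""
    method = quantization_config.get("quant_method")
    if method not in SUPPORTED_QUANT_TYPES:
        raise ValueError(
            f"Unknown quantization type, got {method} - "
            f"supported types are: {SUPPORTED_QUANT_TYPES}"
        )
    return method

# DeepSeek-V3 ships {"quant_method": "fp8", ...} in its quantization_config,
# which trips the check exactly as in the traceback above.
try:
    check_quant_method({"quant_method": "fp8"})
except ValueError as exc:
    print("reproduced:", exc)
```

In practice the usual way out is to install a newer transformers release whose supported list includes the fp8 method (or a backend that handles it), since "fp8" is simply absent from the list this version ships; the download warnings in the log also suggest pinning a revision of the remote code files, which from_pretrained supports via its revision parameter, though that alone does not fix the quantization mismatch.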
