runtime error

Exit code: 1. Reason:
llama_model_loader: - kv  21: tokenizer.ggml.pre               str              = default
llama_model_loader: - kv  22: tokenizer.ggml.tokens            arr[str,262144]  = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv  23: tokenizer.ggml.scores            arr[f32,262144]  = [-1000.000000, -1000.000000, -1000.00...
llama_model_loader: - kv  24: tokenizer.ggml.token_type        arr[i32,262144]  = [3, 3, 3, 3, 3, 4, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  25: tokenizer.ggml.bos_token_id      u32              = 2
llama_model_loader: - kv  26: tokenizer.ggml.eos_token_id      u32              = 106
llama_model_loader: - kv  27: tokenizer.ggml.unknown_token_id  u32              = 3
llama_model_loader: - kv  28: tokenizer.ggml.padding_token_id  u32              = 0
llama_model_loader: - kv  29: tokenizer.ggml.add_bos_token     bool             = true
llama_model_loader: - kv  30: tokenizer.ggml.add_eos_token     bool             = false
llama_model_loader: - kv  31: tokenizer.ggml.add_space_prefix  bool             = false
llama_model_loader: - type  f32:  157 tensors
llama_model_loader: - type q8_0:  183 tensors
error loading model: unknown model architecture: 'gemma3'
llama_load_model_from_file: failed to load model
AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
Traceback (most recent call last):
  File "/home/user/app/app.py", line 12, in <module>
    llm = Llama(
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama.py", line 962, in __init__
    self._n_vocab = self.n_vocab()
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama.py", line 2266, in n_vocab
    return self._model.n_vocab()
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama.py", line 251, in n_vocab
    assert self.model is not None
AssertionError
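The decisive line is error loading model: unknown model architecture: 'gemma3'. The llama.cpp build bundled with this llama-cpp-python release predates Gemma 3 support, so llama_load_model_from_file returns no model and the later assert self.model is not None fires; the usual remedy is upgrading llama-cpp-python to a release whose bundled llama.cpp knows the 'gemma3' architecture and rebuilding the Space. A minimal triage sketch (diagnose_llama_log is a hypothetical helper, not part of llama-cpp-python, and the patterns cover only the failure modes seen in this log):

```python
import re


def diagnose_llama_log(log: str) -> str:
    """Scan llama.cpp loader output and report the probable root cause.

    Hypothetical helper for triaging container logs; it only recognizes
    the two signatures visible in the traceback above.
    """
    # An unrecognized architecture means the bundled llama.cpp is too old
    # for this GGUF file.
    m = re.search(r"unknown model architecture: '([^']+)'", log)
    if m:
        return (
            f"llama.cpp does not recognize architecture '{m.group(1)}'; "
            "upgrade llama-cpp-python so its bundled llama.cpp supports it"
        )
    # The AssertionError is only a downstream symptom of a failed load.
    if "assert self.model is not None" in log:
        return "model failed to load before the Python wrapper used it"
    return "no known failure signature found"


log_excerpt = "error loading model: unknown model architecture: 'gemma3'"
print(diagnose_llama_log(log_excerpt))
```

Note that the AssertionError in llama_cpp/llama.py is not the bug itself; it is how the wrapper surfaces the earlier native load failure, so fixing the architecture mismatch makes both messages disappear.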
