runtime error
Downloading (…)chat.ggmlv3.q5_0.bin: 100%|██████████| 4.63G/4.63G [01:39<00:00, 46.8MB/s]
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from /home/user/.cache/huggingface/hub/models--TheBloke--Llama-2-7B-Chat-GGML/snapshots/b616819cd4777514e3a2d9b8be69824aca8f5daf/llama-2-7b-chat.ggmlv3.q5_0.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    llm = Llama(
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 323, in __init__
    assert self.model is not None
AssertionError
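The magic number `67676a74` in the log is the ASCII string "ggjt", the header of the legacy GGML v3 container, while recent llama.cpp builds (and the llama-cpp-python releases that wrap them) only parse GGUF files. The loader therefore rejects the downloaded `.ggmlv3.q5_0.bin` before `Llama(...)` finishes, which is what trips the `assert self.model is not None`. A minimal sketch for checking which container a local model file uses (the function name and the set of legacy magics are my own illustration; the values mirror how llama.cpp reads the first four bytes as a little-endian uint32):

```python
import struct

# Magic values as llama.cpp interprets them: the file's first 4 bytes
# read as a little-endian uint32.
GGUF_MAGIC = 0x46554747  # b"GGUF" -> current llama.cpp format
GGML_MAGICS = {
    0x67676D6C,  # "ggml" - original unversioned GGML
    0x67676D66,  # "ggmf" - versioned GGML
    0x67676A74,  # "ggjt" - GGML v1-v3, the value printed in the log above
}

def model_format(path: str) -> str:
    """Return 'gguf', 'ggml', or 'unknown' from the file's leading magic."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    if magic == GGUF_MAGIC:
        return "gguf"
    if magic in GGML_MAGICS:
        return "ggml"
    return "unknown"
```

Running this against the cached `llama-2-7b-chat.ggmlv3.q5_0.bin` should report `ggml`, confirming the format mismatch rather than a corrupted download.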
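Two common ways to resolve the mismatch, sketched below. Both the pinned version number and the GGUF repo/filename are assumptions to verify against the llama-cpp-python changelog and the Hugging Face hub, not values taken from the log:

```shell
# Option A: pin llama-cpp-python to a release that still reads GGML v3
# (0.1.78 is commonly cited as the last such release - verify in the changelog)
pip install "llama-cpp-python==0.1.78"

# Option B: keep current llama-cpp-python and fetch a GGUF quantization instead
# (TheBloke publishes a GGUF variant of this model; filename is an assumption)
pip install -U huggingface_hub
huggingface-cli download TheBloke/Llama-2-7B-Chat-GGUF \
    llama-2-7b-chat.Q5_K_M.gguf --local-dir .
```

With Option B, the `Llama(model_path=...)` call in `app.py` must point at the downloaded `.gguf` file instead of the cached `.bin`.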