Runtime error
100%|██████████| 4.47G/4.47G [00:22<00:00, 212MB/s]
Visual encoder initialized. Initializing language decoder from openllmplayground/vicuna_7b_v0 ...
Traceback (most recent call last):
  File "/home/user/app/app_case.py", line 23, in <module>
    model = OpenLLAMAPEFTModel(**args)
  File "/home/user/app/model/openllama.py", line 105, in __init__
    self.llama_model = LlamaForCausalLM.from_pretrained(vicuna_ckpt_path, use_auth_token=os.environ['API_TOKEN'])
  File "/usr/local/lib/python3.10/os.py", line 680, in __getitem__
    raise KeyError(key) from None
KeyError: 'API_TOKEN'
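The KeyError occurs because `os.environ['API_TOKEN']` raises when the variable is undefined in the container, so the fix is to define `API_TOKEN` in the environment (for a Hugging Face Space, as a repository secret) before the app starts. A minimal sketch of a more defensive lookup that fails with an actionable message instead of a bare KeyError (the helper name `get_required_env` is hypothetical, not part of the repository):

```python
import os


def get_required_env(name: str) -> str:
    """Return a required environment variable, failing with a clear message.

    os.environ[name] raises a bare KeyError when the variable is missing;
    this wrapper raises RuntimeError with a hint about where to set it.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(
            f"Environment variable {name!r} is not set; "
            "define it (e.g. as a secret in the Space settings) before launch."
        )
    return value
```

The call in `model/openllama.py` would then read `use_auth_token=get_required_env('API_TOKEN')`, surfacing the misconfiguration explicitly rather than as a KeyError deep in `os.py`.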