runtime error

Exit code: 1. Reason:

etv2w24s4ep4.ckpt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 108M/108M [00:00<00:00, 189MB/s]
s2Gv2ProPlus.pth: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 200M/200M [00:00<00:00, 300MB/s]
SoVITS_weights_v4/zoengjyutgaai_e2_s534_(…): 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 75.6M/75.6M [00:01<00:00, 61.4MB/s]
GPT_weights_v4/zoengjyutgaai-e15.ckpt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 155M/155M [00:01<00:00, 87.6MB/s]
[nltk_data] Downloading package averaged_perceptron_tagger_eng to
[nltk_data]     /home/user/nltk_data...
[nltk_data]   Unzipping taggers/averaged_perceptron_tagger_eng.zip.
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "/home/user/app/inference_webui.py", line 252, in <module>
    change_sovits_weights("pretrained_models/SoVITS_weights_v4/zoengjyutgaai_e2_s534_l32.pth")
  File "/home/user/app/inference_webui.py", line 202, in change_sovits_weights
    dict_s2 = torch.load(sovits_path, map_location="cpu")
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/torch/serialization.py", line 1384, in load
    return _legacy_load(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/torch/serialization.py", line 1628, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: unpickling stack underflow
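The `unpickling stack underflow` raised inside `torch.load` means the bytes handed to the pickle module were not a valid checkpoint at all. Two common causes are a truncated or corrupted download, and a Git LFS pointer text file committed in place of the real weights. A minimal sketch for inspecting the file header before loading — the `diagnose_checkpoint` helper and its message strings are illustrative, not part of GPT-SoVITS or PyTorch:

```python
import zipfile

def diagnose_checkpoint(path):
    """Heuristic check for why torch.load might raise an UnpicklingError.

    Only inspects the file header; it does not attempt to unpickle anything.
    """
    with open(path, "rb") as f:
        head = f.read(64)
    if not head:
        return "empty file (truncated or failed download)"
    if head.startswith(b"version https://git-lfs"):
        return "Git LFS pointer file -- the real weights were never downloaded"
    if zipfile.is_zipfile(path):
        return "zip-based checkpoint (modern torch.save format); header looks OK"
    if head[:1] == b"\x80":
        return "legacy pickle checkpoint; header looks OK"
    return "unrecognized header -- file is likely corrupt or not a checkpoint"
```

If this reports a pointer file or a truncated download, re-fetching the weights (e.g. deleting the cached file so it is downloaded again) is usually enough; if the header looks fine, the problem is more likely inside the checkpoint itself.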
