Error in textgenwebui

#1
by Nark103 - opened

Doesn't seem to work with the latest textgenwebui pull:

Traceback (most recent call last):
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\modules\ui_model_menu.py", line 244, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\modules\models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\modules\models.py", line 271, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama.py", line 338, in __init__
    self._model = _LlamaModel(
                  ^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\installer_files\env\Lib\site-packages\llama_cpp_cuda\_internals.py", line 57, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: models\DeepSeek-Coder-V2-Instruct.i1-Q4_K_S.gguf

If your timestamps are any indication, you are using absolutely antique software. Are you sure you are using a current release of llama.cpp? In any case, "Failed to load model" is not a useful error message; you should report this upstream and get a sensible error message.
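One cheap way to separate "the download is broken" from "the installed llama.cpp is too old for this architecture" is to look at the file's GGUF header before involving any loader. This is a hypothetical stdlib-only helper, not part of textgenwebui or llama.cpp; it assumes the documented GGUF layout (4-byte "GGUF" magic followed by a little-endian uint32 format version):

```python
import struct

def gguf_header(path):
    """Return the GGUF format version, or raise if the magic is wrong."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            # Truncated or corrupt download, or not a GGUF file at all.
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # Format version is the next 4 bytes, little-endian unsigned int.
        (version,) = struct.unpack("<I", f.read(4))
        return version
```

If this prints a small version number (2 or 3 for current files) the file itself is plausible, and an opaque "Failed to load model" more likely means the bundled llama.cpp predates the model's architecture.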

mradermacher changed discussion status to closed

The folder name is old, but I made an update before opening this issue. What do you mean by "report this upstream"?

The console gives:
Traceback (most recent call last):
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\modules\ui_model_menu.py", line 244, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\modules\models.py", line 93, in load_model
    output = load_func_map[loader](model_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\modules\models.py", line 271, in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\modules\llamacpp_model.py", line 103, in from_pretrained
    result.model = Llama(**params)
                   ^^^^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\installer_files\env\Lib\site-packages\llama_cpp_cuda\llama.py", line 338, in __init__
    self._model = _LlamaModel(
                  ^^^^^^^^^^^^
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\installer_files\env\Lib\site-packages\llama_cpp_cuda\_internals.py", line 57, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: models\DeepSeek-Coder-V2-Instruct.i1-Q4_K_S.gguf

Exception ignored in: <function LlamaCppModel.__del__ at 0x0000020937F156C0>
Traceback (most recent call last):
  File "G:\SD\text-generation-webui-snapshot-2024-02-11\modules\llamacpp_model.py", line 58, in __del__
    del self.model
AttributeError: 'LlamaCppModel' object has no attribute 'model'

AttributeError: 'LlamaCppModel' object has no attribute 'model' - that's a programming bug in the code. It might be triggered by the gguf, but without any usable error message, and given that the file works with llama.cpp, it's very unlikely that it is a problem with the gguf itself.
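For what that secondary bug looks like: if `Llama(**params)` raises inside `from_pretrained`, the half-built object never gets a `model` attribute, so its destructor's `del self.model` raises a second, unrelated exception during cleanup. A minimal sketch of the guard that avoids it; the class name mirrors the traceback, but the code is illustrative, not the webui's actual implementation:

```python
class LlamaCppModel:
    """Illustrative stand-in for the class in the traceback."""

    def __del__(self):
        # Guard the delete: if construction failed before .model was
        # assigned, cleanup should be a no-op instead of raising
        # AttributeError while the original error is being reported.
        if hasattr(self, "model"):
            del self.model
```

With the guard, the only traceback the user sees is the real one from the failed load.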
