llama.cpp - GGUF support
#1
by
Doctor-Chad-PhD
- opened
Hi,
Is this model not supported by llama.cpp (gguf format)?
Or is there an error in the implementation?
I'm getting this message when trying to quantize to gguf at the moment:
File "convert_hf_to_gguf.py", line 2553, in set_gguf_parameters
logit_scale = self.hparams["hidden_size"] / self.hparams["dim_model_base"]
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: 'dim_model_base'
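A minimal sketch of why the conversion fails: convert_hf_to_gguf.py derives logit_scale from two config.json keys, so a missing dim_model_base raises KeyError before quantization starts. The hidden_size value below is illustrative, not taken from this model's actual config.

```python
# Hypothetical hparams dict, as loaded from config.json, missing 'dim_model_base'
hparams = {"hidden_size": 2304}

try:
    logit_scale = hparams["hidden_size"] / hparams["dim_model_base"]
except KeyError as e:
    print(f"KeyError: {e}")  # reproduces the reported failure

# Once the key is present, the same division succeeds:
hparams["dim_model_base"] = 256
logit_scale = hparams["hidden_size"] / hparams["dim_model_base"]
print(logit_scale)  # 2304 / 256 = 9.0
```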
Thank you for your time.
Doctor-Chad-PhD
changed discussion title from
GGUF support
to llama.cpp - GGUF support
We have already fixed this error by adding dim_model_base = 256 to the config. You can update the model and try again.
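For anyone hitting the same error before pulling the updated model, the fix amounts to adding the missing key to config.json. A sketch (the hidden_size value is illustrative; 256 is the value stated above):

```json
{
  "hidden_size": 2304,
  "dim_model_base": 256
}
```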
I sincerely apologize for the oversight. When I was working on fixing this bug, I didn't notice your commit. I truly appreciate your contribution and help with our project.
You're absolutely right to point this out - we should have acknowledged your original work properly. We'll improve our review process to prevent this from happening again.
No problem, thank you ❤️
xcjthu
changed discussion status to
closed