GGUF fails to load in ollama and LM Studio
#21 opened by JackeyBee
I'm seeing something similar when loading via llama-cpp-python, with another "'Llama' object has no attribute '_lora_adapter'" message. This has only happened with the latest llama-cpp-python release.
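For reference, the failure on my side is just a plain GGUF load, roughly like the sketch below (the model path is a placeholder; everything else is left at defaults):

```python
# Minimal sketch of the load that fails for me; the model path is a placeholder.
from llama_cpp import Llama

# On the latest llama-cpp-python release this raises the
# "'Llama' object has no attribute '_lora_adapter'" error for this GGUF.
llm = Llama(model_path="./model-q4_k_m.gguf")

out = llm("Hello", max_tokens=16)
print(out["choices"][0]["text"])
```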
See https://github.com/ggerganov/llama.cpp/pull/8627. Updated GGUF files are required since this recent change.
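If it helps, here is a rough sketch of regenerating a GGUF with a llama.cpp checkout that already includes that PR, driven from Python. The paths, converter script name, and quantization type are assumptions about your setup; for LoRA adapter GGUFs, the convert_lora_to_gguf.py script added in that PR would be the one to use instead. Check the converter's --help in your checkout before running anything.

```python
# Rough sketch: re-convert the original HF weights with a recent llama.cpp
# checkout (one that includes PR #8627) so the resulting GGUF matches the
# current format. Paths, script name, and quant type are assumptions.
import subprocess

LLAMA_CPP_DIR = "./llama.cpp"         # recent checkout including PR #8627
HF_MODEL_DIR = "./original-hf-model"  # original (non-GGUF) model weights
OUT_F16 = "./model-f16.gguf"

# 1. Convert the HF checkpoint to an f16 GGUF with the current converter.
subprocess.run(
    ["python", f"{LLAMA_CPP_DIR}/convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outfile", OUT_F16, "--outtype", "f16"],
    check=True,
)

# 2. Optionally quantize with the llama-quantize binary built from the same checkout.
subprocess.run(
    [f"{LLAMA_CPP_DIR}/llama-quantize", OUT_F16, "./model-q4_k_m.gguf", "Q4_K_M"],
    check=True,
)
```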