Synthia-7B-v1.3.q4_k_s.gguf doesn't work
#2
by vasilee - opened
Using llama_cpp_python==0.1.83 with Synthia-7B-v1.3.q4_k_s.gguf leads to this error: AttributeError: 'builtin_function_or_method' object has no attribute 'encode'
It works fine with mistral-7b-v0.1.Q4_K_S.gguf by TheBloke.
Code:
from llama_cpp import Llama

# Load the GGUF model with a 4096-token context window
llm = Llama(model_path="./models/Synthia-7B-v1.3.q4_k_s.gguf",
            verbose=True, n_ctx=4096)

# Run a simple completion (renamed from `input` to avoid shadowing the built-in)
prompt = "hi, how are you?"
output = llm(prompt, temperature=0, top_k=5)
print(output)
I'm clueless; it works locally.
Make sure your file isn't corrupted, or try another quant to be sure. Thank you!
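For reference, a minimal way to rule out a corrupted download is to compare the local file's SHA-256 against the checksum shown on the model's "Files and versions" page on Hugging Face. This is just a sketch; the model path below assumes the same layout as the snippet above.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so large GGUF files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare this output by hand with the checksum listed for the .gguf file on the Hub
print(sha256_of("./models/Synthia-7B-v1.3.q4_k_s.gguf"))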
vasilee changed discussion status to closed