Missing pre-tokenizer type
#1 opened by failspy
This model's GGUF will need to be redone because of the missing pre-tokenizer type: https://github.com/ggerganov/llama.cpp/issues/7021
The current GGUF will still load and run, but tokenization will likely be slightly off, so output quality may suffer.
The fp16 has been re-converted without this issue; the other quants will be updated later.
The q4 quant has been updated.
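For context, a hedged sketch of what the re-conversion adds: the llama.cpp fix for the issue above records the pre-tokenizer under a `tokenizer.ggml.pre` metadata key in the GGUF header. The snippet below is an illustration, not the actual conversion script. It writes a minimal GGUF v3 header containing string metadata using only the stdlib `struct` module and reads it back; the value `"llama-bpe"` is an assumed example of a pre-tokenizer name.

```python
import struct

GGUF_MAGIC = b"GGUF"
GGUF_VERSION = 3
TYPE_STRING = 8  # GGUF metadata value-type tag for strings

def write_minimal_gguf(kv: dict) -> bytes:
    """Serialize a GGUF header with zero tensors and string-only metadata."""
    buf = bytearray()
    buf += GGUF_MAGIC
    buf += struct.pack("<IQQ", GGUF_VERSION, 0, len(kv))  # version, tensor count, kv count
    for key, value in kv.items():
        kb, vb = key.encode(), value.encode()
        buf += struct.pack("<Q", len(kb)) + kb   # key: length-prefixed UTF-8
        buf += struct.pack("<I", TYPE_STRING)    # value type tag
        buf += struct.pack("<Q", len(vb)) + vb   # value: length-prefixed UTF-8
    return bytes(buf)

def read_string_kv(data: bytes) -> dict:
    """Parse back the string metadata written above."""
    assert data[:4] == GGUF_MAGIC, "not a GGUF file"
    _version, _tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    off, out = 24, {}  # 4 magic + 4 version + 8 tensor count + 8 kv count
    for _ in range(n_kv):
        (klen,) = struct.unpack_from("<Q", data, off); off += 8
        key = data[off:off + klen].decode(); off += klen
        (vtype,) = struct.unpack_from("<I", data, off); off += 4
        assert vtype == TYPE_STRING
        (vlen,) = struct.unpack_from("<Q", data, off); off += 8
        out[key] = data[off:off + vlen].decode(); off += vlen
    return out

# A pre-fix file lacks the key; a re-converted one carries it.
old = read_string_kv(write_minimal_gguf({"general.architecture": "llama"}))
new = read_string_kv(write_minimal_gguf({"general.architecture": "llama",
                                         "tokenizer.ggml.pre": "llama-bpe"}))
print("tokenizer.ggml.pre" in old)  # False
print(new["tokenizer.ggml.pre"])   # llama-bpe
```

With the key absent, llama.cpp has to guess the pre-tokenization scheme, which is why the older files "work but not as nicely".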
failspy changed discussion status to closed