GGUF quantization error
#1
by
Doctor-Chad-PhD
- opened
I'm getting this error when trying to quantize this model to GGUF with llama.cpp:
AssertionError: HunYuan dynamic RoPE scaling assumptions changed, please update the logic or context length manually
Is there any way to fix this?
Thank you
It's odd, since the Chimera model can be converted to GGUF.
We have updated the "max_position_embeddings" in config.json, could you please try again?
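For anyone whose local copy of the model still has the old config, the value can also be patched before re-running the conversion. A minimal sketch, assuming the model directory path and the target context length are placeholders (check the official config.json for the correct value):

```python
import json
import tempfile
from pathlib import Path

# Demo setup: in practice, config_path points at the config.json
# inside your downloaded model directory.
model_dir = Path(tempfile.mkdtemp())
config_path = model_dir / "config.json"
config_path.write_text(json.dumps({"max_position_embeddings": 32768}))

config = json.loads(config_path.read_text())
# Set the context length explicitly so llama.cpp's HunYuan dynamic
# RoPE scaling assertion no longer trips. 262144 is a placeholder,
# not the official value.
config["max_position_embeddings"] = 262144
config_path.write_text(json.dumps(config, indent=2))
print(config["max_position_embeddings"])
```

After patching, re-run the llama.cpp conversion script against the same model directory.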
Could you upload the official quantized files?
I've tried many times: I converted the model to GGUF and used it in Ollama, but the output is gibberish and sometimes repeats one word many times. Not sure where I'm going wrong.