GGUF quantization error

#1
by Doctor-Chad-PhD - opened

I'm getting this error when trying to quantize this model to GGUF with llama.cpp:

AssertionError: HunYuan dynamic RoPE scaling assumptions changed, please update the logic or context length manually

Is there any way to fix this?

Thank you

It's odd, since the Chimera model can be converted to GGUF.

Tencent org

We have updated "max_position_embeddings" in config.json, could you please try again?
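For anyone applying the same fix locally before re-running llama.cpp's convert_hf_to_gguf.py, a minimal sketch of editing the field in config.json is below. The file path and the value 131072 are assumptions for illustration; use the context length your checkpoint actually supports.

```python
import json
from pathlib import Path

def update_context_length(config_path, new_len):
    """Set max_position_embeddings in a HF-style config.json in place."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["max_position_embeddings"] = new_len
    path.write_text(json.dumps(config, indent=2))

# Demo on a throwaway config so the sketch is runnable as-is;
# point it at the real model directory's config.json in practice.
Path("config.json").write_text(json.dumps({"max_position_embeddings": 32768}))
update_context_length("config.json", 131072)
```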

Yes, it can be converted to GGUF and quantized with the change. Thank you.
[Screenshot attachment: sampleworking.png]

@hhoh thank you it works for me too now.

Could you upload official quantized files?

I've tried many times: I converted to GGUF and used it in Ollama, but the output is gibberish and it sometimes repeats one word many times. Not sure where I'm wrong.
