Unable to merge the GGUF

#4
by aaron-newsome - opened

Using the command: llama-gguf-split --merge gpt-oss-120b-UD-Q8_K_XL-00001-of-00002.gguf gpt-oss-120b-UD-Q8_K_XL.gguf

gguf_merge: reading metadata gpt-oss-120b-UD-Q8_K_XL-00001-of-00002.gguf ...gguf_init_from_file_impl: tensor 'blk.0.ffn_down_exps.weight' has invalid ggml type 39 (NONE)

version:

llama-gguf-split --version 
version: 5992 (ce111d39)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
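As an aside, with a sufficiently recent llama.cpp build, merging is often unnecessary: the tools can load a sharded GGUF when pointed at the first split file. A minimal sketch (the binary path assumes a default CMake build; the prompt flag is just for illustration):

```shell
# Load the sharded model directly; llama.cpp discovers the remaining
# shards from the -00001-of-00002.gguf naming convention.
./build/bin/llama-cli -m gpt-oss-120b-UD-Q8_K_XL-00001-of-00002.gguf -p "Hello"
```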
Unsloth AI org

Does the 20B model work? Did you recompile llama.cpp and update it?

I didn't recompile llama.cpp; that seemed like a lot of work. I used the model from the ollama repository instead.

ollama probably doesn't include the related PRs yet; your only option may be to compile llama.cpp from the latest commits yourself.
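For reference, a from-source build roughly follows the steps in the llama.cpp README (repository URL and CMake invocation as documented upstream; add backend flags, e.g. for CUDA, as needed):

```shell
# Clone upstream and build the tools with CMake (CPU-only by default).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j
# The freshly built binaries land in build/bin:
./build/bin/llama-gguf-split --version
```

Re-running the merge with the rebuilt llama-gguf-split should then recognize the newer quantization types.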
