ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax'

#1
by kowal66b

When trying to convert this model to GGUF using llama.cpp, I end up with this error:
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax'.

The full error output is below:

INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00002.safetensors'
INFO:hf-to-gguf:token_embd.weight,           torch.float32 --> F16, shape = {4096, 128256}
INFO:hf-to-gguf:blk.0.attn_norm.weight,      torch.float32 --> F32, shape = {4096}
INFO:hf-to-gguf:blk.0.ffn_down.weight,       torch.uint8 --> F32, shape = {29360128}
Traceback (most recent call last):
  File "D:\llama.cpp\convert-hf-to-gguf.py", line 3263, in <module>
    main()
  File "D:\llama.cpp\convert-hf-to-gguf.py", line 3257, in main
    model_instance.write()
  File "D:\llama.cpp\convert-hf-to-gguf.py", line 330, in write
    self.write_tensors()
  File "D:\llama.cpp\convert-hf-to-gguf.py", line 1413, in write_tensors
    super().write_tensors()
  File "D:\llama.cpp\convert-hf-to-gguf.py", line 267, in write_tensors
    for new_name, data in ((n, d.squeeze().numpy()) for n, d in self.modify_tensors(data_torch, name, bid)):
                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\llama.cpp\convert-hf-to-gguf.py", line 1410, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\llama.cpp\convert-hf-to-gguf.py", line 185, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax'

I encounter a similar issue when trying to use this model with Ollama.

It's a 4-bit (bitsandbytes) version; I'd guess that's the issue. You could try merging the adapter into the normal llama3-instruct model and then running the llama.cpp conversion script again; a rough sketch is below.
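A minimal sketch of that merge step with transformers + PEFT. The base-model ID, adapter path, and output directory are placeholders, not from this thread:

```python
# Sketch: merge a LoRA adapter into a full-precision base model so the
# resulting checkpoint has no bitsandbytes quant-state tensors (.absmax etc.).
# base_id, adapter_dir and the output paths below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # full-precision base, not the bnb-4bit repo
adapter_dir = "path/to/your/lora-adapter"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_dir)
model = model.merge_and_unload()  # bake the LoRA weights into the base

model.save_pretrained("merged-fp16")
AutoTokenizer.from_pretrained(base_id).save_pretrained("merged-fp16")
# Then: python convert-hf-to-gguf.py merged-fp16
```

The merged checkpoint contains only plain fp16 weights, so the converter's tensor-name mapping should no longer hit the .absmax quantization-state tensors.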

I am also getting the same error. I fine-tuned the unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit model, and I hit this error while merging it with the base model. Is it okay to merge the model with the regular unsloth/Meta-Llama-3.1-8B-Instruct model instead? Please let me know if that would work. I appreciate your help. Thank you!
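For reference, Unsloth's documented save helpers are meant to do exactly this merge. A sketch assuming the standard Unsloth API, where model and tokenizer come from unsloth.FastLanguageModel after fine-tuning and the output directories are placeholders:

```python
# Sketch assuming the standard Unsloth API; `model` and `tokenizer` come from
# unsloth.FastLanguageModel.from_pretrained(...) after fine-tuning.
model.save_pretrained_merged(
    "merged-16bit",              # output directory (placeholder)
    tokenizer,
    save_method="merged_16bit",  # dequantize + merge the LoRA into fp16 weights
)

# Unsloth can also write GGUF directly, skipping convert-hf-to-gguf.py:
# model.save_pretrained_gguf("model-gguf", tokenizer, quantization_method="q4_k_m")
```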

Having the same issue when trying to convert unsloth/mistral-7b-bnb-4bit to GGUF.
Any help appreciated.
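One workaround that may apply here: recent transformers versions (assumption: roughly 4.43+) expose model.dequantize() for bitsandbytes-quantized models, which unpacks the 4-bit weights back to full precision so the saved checkpoint has no .absmax tensors. A sketch, assuming a CUDA GPU since bitsandbytes 4-bit generally does not run on CPU:

```python
# Sketch: dequantize the bnb-4bit checkpoint to fp16, then convert that.
# Assumes a recent transformers with PreTrainedModel.dequantize() and a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/mistral-7b-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
model = model.dequantize()  # unpack 4-bit weights back to full precision

model.save_pretrained("mistral-7b-fp16")
AutoTokenizer.from_pretrained(model_id).save_pretrained("mistral-7b-fp16")
# Then: python convert-hf-to-gguf.py mistral-7b-fp16
```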

Any solution yet?
