llama.cpp - Quantize error: KeyError: '<|user|>'

#1
by ayyylol - opened

Hi!

Thank you for this model!

I'm trying to quantize it using llama.cpp but I'm getting this error:

INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
  File "/home/llama.cpp/convert_hf_to_gguf.py", line 3953, in <module>
    main()
  File "/home/llama.cpp/convert_hf_to_gguf.py", line 3947, in main
    model_instance.write()
  File "/home/llama.cpp/convert_hf_to_gguf.py", line 388, in write
    self.prepare_metadata(vocab_only=False)
  File "/home/llama.cpp/convert_hf_to_gguf.py", line 381, in prepare_metadata
    self.set_vocab()
  File "/home/llama.cpp/convert_hf_to_gguf.py", line 3704, in set_vocab
    special_vocab._set_special_token("eot", tokenizer.get_added_vocab()["<|user|>"])
                                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: '<|user|>'

Lines 3702-3704 of convert_hf_to_gguf.py read:

# only add special tokens when they were not already loaded from config.json
special_vocab._set_special_token("eos", tokenizer.get_added_vocab()["<|endoftext|>"])
special_vocab._set_special_token("eot", tokenizer.get_added_vocab()["<|user|>"])

If I delete line 3704 the conversion works, but then I suspect the EOT token is missing from the resulting GGUF.
Should this be fixed?
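
(One way to check whether an EOT token id actually made it into a converted file is to read the GGUF metadata with the gguf Python package from llama.cpp's gguf-py. This is just a minimal sketch; the filename is a placeholder for whatever the converter wrote out.)

from gguf import GGUFReader  # pip install gguf (bundled with llama.cpp as gguf-py)

# Placeholder path: point this at the file convert_hf_to_gguf.py produced.
reader = GGUFReader("LongWriter-glm4-9b-F16.gguf")

field = reader.fields.get("tokenizer.ggml.eot_token_id")
if field is None:
    print("no eot_token_id stored in this GGUF")
else:
    # Scalar metadata fields keep their value in the last part.
    print("eot_token_id =", int(field.parts[-1][0]))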

Thank you!

Knowledge Engineering Group (KEG) & Data Mining at Tsinghua University org

Hi! You can get the token ID with tokenizer.get_command("<|user|>").
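
For anyone else hitting this, here is a minimal patch sketch for lines 3702-3704 of convert_hf_to_gguf.py based on that suggestion. It assumes the tokenizer was loaded with trust_remote_code, so it is the model's own ChatGLM tokenizer class that exposes get_command(); the added_vocab branch keeps the original lookup for checkpoints that do register <|user|> as an added token:

# only add special tokens when they were not already loaded from config.json
added_vocab = tokenizer.get_added_vocab()
special_vocab._set_special_token("eos", added_vocab["<|endoftext|>"])
if "<|user|>" in added_vocab:
    # original behavior: <|user|> is registered as an added token
    special_vocab._set_special_token("eot", added_vocab["<|user|>"])
else:
    # this checkpoint keeps <|user|> as a command token instead, so ask the
    # ChatGLM tokenizer for its id directly
    special_vocab._set_special_token("eot", tokenizer.get_command("<|user|>"))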

Thank you @bys0318 that worked!

Hi, how do I fix it? Thanks!

ayyylol changed discussion status to closed

Refer to https://github.com/THUDM/LongWriter/issues/14#issuecomment-2300243148 for help.

You can find GGUF quants at QuantFactory/LongWriter-glm4-9b-GGUF
