Vocab size mismatch error when converting to f16

#5
by Alex-B - opened

When trying to convert this model to an f16 version using llama.cpp's convert.py script, I get the following error:

Exception: Vocab size mismatch (model has 32016, but ../models/GPT4-X-Alpasta-30b/tokenizer.model combined with ../models/GPT4-X-Alpasta-30b/added_tokens.json has 32005).
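For reference, the check that trips here is simple arithmetic: the number of pieces in tokenizer.model plus the number of entries in added_tokens.json must equal the vocab size the checkpoint declares. A minimal sketch of that check (the base size of 32000 and the placeholder token names are illustrative assumptions, not values read from this repo, and `combined_vocab_size` is not llama.cpp's actual function):

```python
import json

def combined_vocab_size(base_vocab_size: int, added_tokens_json: str) -> int:
    """Mimic convert.py's count: base tokenizer pieces + added_tokens.json entries."""
    added = json.loads(added_tokens_json)
    return base_vocab_size + len(added)

# Placeholder added tokens (assumption): 5 entries, matching the 32005 in the error.
added_tokens = json.dumps({f"<extra_{i}>": 32000 + i for i in range(5)})

model_vocab = 32016                                   # what the checkpoint declares
tokenizer_vocab = combined_vocab_size(32000, added_tokens)

print(tokenizer_vocab)               # 32005
print(model_vocab - tokenizer_vocab) # 11 token ids unaccounted for
```

So the checkpoint expects 11 more token entries than the tokenizer files provide, which is why the script refuses to proceed.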

Am I doing something wrong? The files are up-to-date with the main branch version.

Command I used for reference: python3 convert.py --outfile ../models/GPT4-X-Alpasta-30b/ggml-model-f16.bin --outtype f16 ../models/GPT4-X-Alpasta-30b/
