
Wrong token count in the config?

#10
by VertexMachine - opened

I see in the tokenizer_config.json:

"model_max_length": 3192,

That's a typo, right?

Abacus.AI, Inc. org

Not really a typo, but a leftover from model eval / upload. Fixing it to the correct value.
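For anyone hitting the wrong limit before the fix lands, here is a minimal sketch of patching the value locally. The working directory, file contents, and the corrected value (32768) are assumptions for illustration; check the model card for the model's real context length.

```python
import json
import tempfile
from pathlib import Path

# Work in a temp dir so no real checkout is touched; the path is illustrative.
workdir = Path(tempfile.mkdtemp())
config_path = workdir / "tokenizer_config.json"

# Reproduce the value reported in the discussion.
config_path.write_text(json.dumps({"model_max_length": 3192}))

# Patch it to the intended context length (hypothetical value; verify first).
config = json.loads(config_path.read_text())
config["model_max_length"] = 32768
config_path.write_text(json.dumps(config, indent=2))
```

Alternatively, the value can be overridden at load time without editing the file, e.g. `AutoTokenizer.from_pretrained(model_id, model_max_length=32768)`, since tokenizer init kwargs passed to `from_pretrained` take precedence over the config.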

siddartha-abacus changed discussion status to closed