
Not following system prompt

#28
by wehapi - opened

As mentioned in the title, the model is not following the system prompt.
I added instructions to the system prompt telling the model to extract data from the user's messages and return it as a JSON object, but it does not succeed.
Sometimes it returns truncated JSON (without reaching max tokens), and sometimes it returns a string that cannot be parsed as JSON.
Is there a different style or format for writing system prompts for this model?
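One thing worth checking: the Mistral-7B-Instruct chat template has no separate system role, so a common workaround is to prepend the system instructions to the first user turn inside the `[INST] ... [/INST]` markers, and to parse the model's reply defensively. The sketch below illustrates both ideas; the function names and the exact prompt wording are my own assumptions, not part of the model's official API.

```python
import json
import re

def build_mistral_prompt(system: str, user: str) -> str:
    # Mistral-7B-Instruct's template only alternates user/assistant
    # turns, so (as a workaround, not an official feature) the system
    # instructions are folded into the first user message.
    return f"<s>[INST] {system}\n\n{user} [/INST]"

def extract_json(text: str):
    # The model may wrap the JSON in prose or a code fence; grab the
    # first {...} span and try to parse it, returning None on failure
    # so the caller can retry instead of crashing.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

prompt = build_mistral_prompt(
    "Extract the user's name and city and reply with only a JSON object.",
    "Hi, I'm Ada from London.",
)
print(prompt)
print(extract_json('Sure! {"name": "Ada", "city": "London"}'))
```

Truncated JSON (well before max tokens) often means the stop condition fired early or the prompt invited extra prose, so asking for "only a JSON object" and validating the reply as above tends to help.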
