Consider adding <start_of_context> and <stop_of_context> or similar special tokens for context ingestion.

#13
by qnixsynapse - opened

Gemma 1.1 is a serious upgrade over the earlier version, and I was able to make it work with documents, though it required some amount of tweaking of the prompt format:

(screenshot attached)

Adding special tokens for context as part of the prompt format would allow the model to differentiate between the context and the query, which currently confuses the model a bit.

IMO, tiny models can benefit from it and it will also reduce hallucination.
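To illustrate the idea, here is a minimal sketch of what the proposed prompt format might look like. Note that `<start_of_context>` and `<stop_of_context>` are the hypothetical tokens this suggestion proposes; they are not part of Gemma's actual vocabulary, and the surrounding turn markers follow Gemma's documented chat format:

```python
def build_prompt(context: str, query: str) -> str:
    """Wrap retrieved context in delimiter tokens so the model can
    distinguish the ingested document from the user's actual question.

    <start_of_context>/<stop_of_context> are the *proposed* special
    tokens from this suggestion, not existing Gemma tokens.
    """
    return (
        "<start_of_turn>user\n"
        "<start_of_context>\n"
        f"{context}\n"
        "<stop_of_context>\n"
        f"{query}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_prompt(
    context="Gemma is a family of lightweight open models from Google.",
    query="What is Gemma?",
)
print(prompt)
```

With delimiters like these trained in as real special tokens, the model would not have to infer from prose alone where the document ends and the question begins.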

This is my suggestion. :)

Google org

Hi @qnixsynapse , that's a very insightful observation! It's great to hear that you're finding Gemma 1.1 to be a significant improvement and that you're actively experimenting with prompt engineering to get even better results. Thank you again for taking the time to share your thoughtful suggestion! Keep the feedback coming.
