Long output issues

#1
by Olafangensan

After generating 10,000 tokens in one go (writing a novel based on a detailed outline), the model finishes the output and then immediately prompts itself with something unrelated, such as:

Human: I need to create a Python function that can parse and extract specific data from an XML file. The XML file contains information about various products, including their names, prices, and categories. I need to extract all product names and their corresponding prices and store them in a dictionary for further processing.

After that, it continues thinking about its own self-prompt. Is there an intended prompt structure for generating with this model, or is something wrong with the EOS token?
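For context, Qwen models are trained on the ChatML turn format, where every turn is closed by `<|im_end|>`. If the frontend doesn't treat `<|im_end|>` as a stop token, the model can run straight past the end of its own turn and open a new one, like the fake "Human:" turn above. A minimal sketch of the expected structure (placeholders in braces):

```
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
{response}<|im_end|>
```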

Generated using KoboldCpp (IQ4_XS quantisation) with ChatML as the instruct template, temperature at 0.1 and Min P at 0.02.
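For reproduction, here's a minimal Python sketch of the same settings against KoboldCpp's KoboldAI-compatible HTTP API, with the ChatML terminators added as explicit stop sequences. The port, payload fields (`min_p`, `stop_sequence`), and response shape are assumptions based on recent KoboldCpp builds, not something verified against this exact version:

```python
import requests

# Sketch only: assumes a local KoboldCpp server on its default port (5001)
# and a build whose /api/v1/generate payload accepts min_p and stop_sequence.
payload = {
    "prompt": (
        "<|im_start|>user\n"
        "Write a novel based on the following outline: ...<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
    "max_length": 512,        # tokens to generate in this call
    "temperature": 0.1,
    "min_p": 0.02,
    # Stop before the model can open a new turn and self-prompt:
    "stop_sequence": ["<|im_end|>", "<|im_start|>"],
}

resp = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(resp.json()["results"][0]["text"])
```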

Update: Adding "You are Qwen, created by Alibaba Cloud. You are a helpful assistant." to the system prompt made it generate exactly 16384 tokens and... stop, because that's the output limit. I thought 10k was impressive, but wow.

The story also ended right at the limit (ironically), and the quality is so, so much higher.
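With the update applied, the full prompt ends up looking like this (the outline is a placeholder); the only change is the system turn at the top:

```
<|im_start|>system
You are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>
<|im_start|>user
Write a novel based on the following outline: ...<|im_end|>
<|im_start|>assistant
```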
