Model collapse after SFT
First of all, I would like to thank you for your work, but I ran into a problem in practice. After fine-tuning on my own instruction data, the model collapsed. My dataset does not include a thought process, so the model's thinking field is empty. After fine-tuning, no matter what question is asked, the model outputs an empty thought process followed by repeated partial answers, as if talking to itself, and it also mixes in text in other languages (e.g., my question is in English, but the answer contains Thai and Korean words). Has anyone encountered this phenomenon before? Thank you for your reply.
Same here... I don't know what is going on and have tried to fix it in various ways: adding /think, checking the `<think>` and `</think>` blocks.
I think it is mostly because this CoT is learned through RL, i.e., through validation of the model's own outputs...
During SFT, did you arrange your data as `prompt <think>\n\n</think> solution`?
Yeah, a little bit more complex than that, but yes!
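For anyone hitting the same issue, here is a minimal sketch of what that layout can look like when assembling SFT samples. The function name, field names, and prompt/response dict layout are illustrative assumptions, not the repo's actual API; the point is only that the assistant turn keeps an explicit (empty) `<think>` block instead of omitting it:

```python
# Hypothetical helper for building one SFT sample whose assistant turn
# carries an explicit think block. When `thought` is empty this produces
# the "<think>\n\n</think>" pattern discussed above.
def build_sample(prompt: str, solution: str, thought: str = "") -> dict:
    assistant = f"<think>\n{thought}\n</think>\n{solution}"
    # Field names ("prompt"/"response") are an assumption; adapt to
    # whatever schema your SFT pipeline expects.
    return {"prompt": prompt, "response": assistant}

sample = build_sample("What is 2 + 2?", "4")
print(sample["response"])
```

Even with no thought content, keeping the paired tags in every training example may help the model preserve the think/answer boundary it learned during RL.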