GPT-Like
It's obvious that this model inherits a lot of qualities from models such as ChatGPT (looking at you, em dash, "it's not this but that," etc. etc.). If you don't like ChatGPT's writing, I doubt you'll be a fan of this one. That said, and as others have mentioned, it has potential.

It absolutely needs a strong repetition penalty (1.2), DRY (1.6, 2, 5), and a well-written character card. I also run dynamic temp (0.7-2). These are just preliminary settings, but without something in that neighborhood it will carry some offhand character trait into every message, probably for longer than its context length, and those traits pile up until the output sometimes spirals without end. Perhaps someone else can research proper sampling for real, but with the values above it's not too bad. XTC on top made it weird. It still repeats sentence structure a lot.
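If you're running llama.cpp, here's a minimal sketch of those settings as CLI flags. I'm assuming the DRY triplet maps to multiplier/base/allowed-length, and that dynamic temp 0.7-2 means a base temp of 1.35 with a 0.65 range; the model path is just a placeholder:

```
# Sketch only: DRY/dynatemp mappings are my assumed reading of the
# numbers above, and the GGUF path is a placeholder.
llama-cli -m ./model.gguf \
  --repeat-penalty 1.2 \
  --dry-multiplier 1.6 --dry-base 2.0 --dry-allowed-length 5 \
  --temp 1.35 --dynatemp-range 0.65
```

Other frontends (SillyTavern, text-generation-webui, etc.) expose the same samplers under similar names, so the values should carry over.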
Maybe, just maybe, with the right sampling this model could beat Violet Twilight (and its further fine-tunes) in the 12B realm (it looks rough that this model is even being compared with 12B ones, but honestly, it's not even close), but I fear it has unsolvable biases that keep it from truly providing an enjoyable experience, even compared with much smaller models. Its intelligence is also very unevenly distributed: it produces a lot of sentences that have you going "Uh, yeah, makes s- Eh?" regardless of parameters.
I understand this release is experimental, and others seem to like it; I saw this model recommended on Reddit with a fair number of upvotes. Ultimately, I can't say I feel the same. Now, to go play with my wife a bit more before her lobotomy.