Metharne is a fake preset.

#6
by jackboot - opened

It really is. None of these models were likely trained on it. That said, it works on monstral. But which metharne?

I have already found that adding a dummy user message helps model intelligence in most presets, but whether to add </s> for the model/user, a \n, or nothing at all is quite up in the air.

The most "mistral"-like option seems to be adding </s> only to the model's output. Messages come out longer, with more chance for slop.
When I add </s> to both the user and model output, even more alignment goes away and things become creative, though replies are a bit more curt.
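
Roughly what I mean, sketched in Python (the helper and tag spelling are just illustrative, not lifted from any preset file):

```python
# Sketch only: build_prompt and the tag names are illustrative, not from the preset.
def build_prompt(turns, user_suffix="", model_suffix="", dummy_user=None):
    """turns: list of (role, text) pairs, role is 'user' or 'model'."""
    parts = []
    if dummy_user is not None:
        # the dummy user message trick: a throwaway first user turn
        parts.append(f"<|user|>{dummy_user}{user_suffix}")
    for role, text in turns:
        suffix = user_suffix if role == "user" else model_suffix
        parts.append(f"<|{role}|>{text}{suffix}")
    parts.append("<|model|>")  # leave the model tag open so it replies next
    return "".join(parts)

chat = [("user", "Hi."), ("model", "Hello!"), ("user", "Continue the story.")]

mistral_like = build_prompt(chat, model_suffix="</s>")                      # </s> on model only
dual_close   = build_prompt(chat, user_suffix="</s>", model_suffix="</s>")  # </s> on both
bare         = build_prompt(chat)                                           # no suffix at all
```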

I was all about putting nothing in the user/assistant suffix until it started writing for me and dropping <|assistant|>, <|assistant> and <|user> tags in longer chats. I tried returns, but that didn't fix it as much as I'd have liked. The </s> has been better.

Since we're all about prompt engineering here... What has worked for you? The only objective feedback I can offer is that with dual </s> the model is twice as likely to successfully output desuposting simulator replies. That card really tests local models. Still, that's only one measure.

Honestly, I have no clue what you're talking about, apologies. I based my "Metharne" preset on what Pygmalion uses since I thought that was the point of it, but I never tested it.

Lmao, I'm talking about this:

[image: preset.png]

Adding closing tags there vs not makes a difference in output.
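
Something like this, if I had to guess at the fields (the names below are illustrative, not copied from the actual preset JSON):

```python
# Guessing at SillyTavern-style instruct fields; names are illustrative only.
without_closers = {
    "input_sequence":  "<|user|>",
    "output_sequence": "<|model|>",
    "input_suffix":    "",
    "output_suffix":   "",
}

# Same preset, but with </s> closing both turns
with_closers = {**without_closers, "input_suffix": "</s>", "output_suffix": "</s>"}
```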

None of these models were likely trained on it.

Incorrect, @TheDrummer is a dumbass /loving and trained most of his modern models on the metharme/pygmalion format.

You're right, he did. The idea itself isn't bad. You sidestep a lot of previous training, for better or worse. Fewer disclaimers, but also less intelligence and instruction following. Monstral is a completely different model depending on whether you prompt it with mistral, metharne, or whatever else.
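
For anyone following along, the same two-turn exchange looks completely different under each format (sketched from memory; double-check against the actual tokenizer config before trusting it):

```python
# Sketched from memory; verify against the tokenizer config before relying on it.
exchange = [("user", "Write me a limerick."), ("model", "There once was a model named Sue...")]

# Mistral-style instruct: user turns wrapped in [INST]...[/INST], </s> closing each reply
mistral_prompt = "<s>" + "".join(
    f"[INST] {text} [/INST]" if role == "user" else f" {text}</s>"
    for role, text in exchange
)

# Metharme/Pygmalion-style: bare role tags, with the final <|model|> left open for the reply
metharme_prompt = "".join(
    f"<|{role}|>{text}" for role, text in exchange
) + "<|model|>"
```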
