My first impression
This model is a very pleasant surprise. I like that it always writes as much as it needs to.
It has an interesting but pleasant style that is different from other LLMs I have used.
The model skilfully pursues your plots, and I like that it spices up the roleplay by noting the little details.
As the RP progresses, the model's adherence to character traits deteriorates significantly (after about 4,000 tokens of context), which is surprisingly bad for a 24b model.
It also ignored parts of the character description several times. I brought this to the model's attention (OOC) and the model admitted it was wrong.
Same experience here.
EDIT:
Sometimes it works pretty well, but most of the time it starts to forget stuff, not even a few messages in... Whoops, wrong repo. I don't know why Huggingface keeps sending me to this one. I'm experiencing this with zerofata_MS3.2-PaintedFantasy-Visage-33B-Q6_K_L.kcpps from
One of the best 22b/24b models I've tried lately. Consistent, with a nice dose of creativity, action, and something I've seen other models lack: character proactivity bounded by their traits... until a certain context threshold (in my case, around 8K-9K). Not only does it tend to forget character traits, it also starts to lose consistency and spatio-temporal awareness, especially when more NPCs or actions happen in different places. It also starts to repeat text, phrases, and patterns. If these issues could be addressed, we could have on our hands one of the best, if not the best, 24b models for RP. IMO, this potential "fixed" version could be even better than ChatWaifu 22b, Gaslit Abomination 24b, and BrokenTutu 24b (my favorites of the last few months).
BTW I'm using mradermacher imatrix quants (Q4_K_M & Q6_K).
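If anyone wants to probe the trait-drift threshold on their own quant, below is a rough sketch of how it could be scripted with llama-cpp-python. The model path, character prompt, and turn count are all placeholders, not a definitive test harness:

```python
# Minimal sketch for checking where character adherence starts to slip.
# Assumes llama-cpp-python is installed; replace the GGUF path with
# whichever quant you actually downloaded (path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="PaintedFantasy-24B.i1-Q6_K.gguf",  # placeholder path
    n_ctx=16384,  # large enough to push past the ~8K-9K trouble zone
)

# A fixed system prompt pins the character traits the model should keep.
messages = [
    {"role": "system", "content": "You are Mira, a stoic mercenary who never uses contractions."},
    {"role": "user", "content": "The tavern door slams open. What do you do?"},
]

# Keep chatting until the context fills up, checking each reply by hand
# (or with a simple string check) for trait violations like contractions.
for turn in range(50):
    out = llm.create_chat_completion(messages=messages, max_tokens=200)
    reply = out["choices"][0]["message"]["content"]
    print(f"--- turn {turn} ---\n{reply}")
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "Continue the scene."})
```

Noting the turn number where the replies first break character gives you a rough token count for the degradation point you can compare across quants.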