I find it interesting with the XTC sampler

#2
by Laetilia - opened

In my humble opinion/experience, this model works better with the addition of the XTC sampler (as surprising as always).
I also think this model has an unusual, even if flawed, style. I find it refreshing and interesting to (role)play with.
More specifically, I currently use this model with the following settings (via llama-cli, part of llama.cpp)...

-c 32768 --top-p 1.0 --dry-multiplier 0.8 --dry-allowed-length 4 --min-p 0.03 --xtc-probability 0.1 --temp 0.6 --chat-template mistral-v7-tekken

Also, I've tried two different quants of this model - i1-IQ4_XS (by mradermacher) and Q6_K_L (by bartowski).
The style of the first quant is, I think, more vivid (which is the strength of this model, in my humble opinion).
But the larger quant actually understands what is happening significantly better (which I personally liked more).
Of course, there are quants in between; I have yet to try them out, and they may or may not feel nicer, IDK. =p

As for the model's weaknesses, I think these are: repetition, repetition (it arises after some time/messages), a tendency to give dialogue replies using my own (even non-dialogue) words, overall confusion about characters and concepts (less so with XTC and at the higher quant), and repetition (XTC helps a bit).

Overall: thank you for an unusual model! ^_^

All this is my personal opinion; opinions (and model usage) of different people can differ.


Why does XTC help to stabilize some models (a few of them, in my opinion)??? It isn't meant to. It is literally a "specifically for creativity" type of sampler. And yet, sometimes a pinch of chaos is all you need, it seems. LLMs are like dark magic, I tell ya...

Owner

Thank you! This is an interesting review.

This model was a big experiment that was more likely to lobotomize the model than make it good, so I'm glad it has turned out well (excluding the repetition).

The token probabilities on this one are really weird, so that lines up with XTC, I think. When I was checking to try and figure out the repetition, there were quite often multiple high-probability options that all looked stable, which XTC would've been quite effective at picking between. (I think when XTC activates, it also stops the model from picking the super-low-probability options, which is why it might actually become more stable from it.)
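
To make that concrete, here is a rough Python sketch of the "Exclude Top Choices" idea behind llama.cpp's --xtc-probability / --xtc-threshold flags. It is illustrative only (made-up tokens, and a threshold of 0.1 assumed from the default), not the actual llama.cpp code:

```python
import random

def xtc_filter(probs, threshold=0.1, probability=0.1, rng=random):
    """Toy sketch of the "Exclude Top Choices" (XTC) idea.

    `probs` maps token -> probability (already normalized).
    Illustrative only; this is not llama.cpp's actual implementation.
    """
    # XTC only activates on a fraction of sampling steps (--xtc-probability).
    if rng.random() >= probability:
        return probs

    # Tokens the model considers "viable", i.e. at or above the threshold.
    viable = [tok for tok, p in probs.items() if p >= threshold]
    if len(viable) < 2:
        return probs  # only one clear favourite, so nothing gets excluded

    # Exclude every viable token *except* the least probable of them,
    # pushing the model off its most predictable continuation while
    # still landing on something it considered plausible.
    keep = min(viable, key=lambda tok: probs[tok])
    kept = {tok: p for tok, p in probs.items() if tok not in viable or tok == keep}

    # Renormalize before the usual sampling step.
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}


# Several stable high-probability options, like the situation described above:
probs = {"sword": 0.35, "blade": 0.30, "weapon": 0.20, "it": 0.10, "banana": 0.05}
print(xtc_filter(probs, threshold=0.1, probability=1.0))
# {'it': 0.666..., 'banana': 0.333...}: the top choices are all excluded,
# and a separate min-p pass would still prune the real junk from the tail.
```

That would fit the stabilizing effect described above: when several stable candidates clear the threshold, XTC just lands on a different one of them, while min-p still keeps the truly unlikely tokens out.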

In the future (or whenever I get the motivation to grow the dataset out), I may try this experiment on another model that doesn't natively have the repetition issues and see if I can recreate the oddly coherent, chaotic experience without the downsides.

Thank you for the explanation of XTC effects!
