Reductor_24B_V.1
This is a merge of pre-trained language models.
This model is a byproduct of my experiments aimed at creating a model free from toxic positivity, suitable for "dark" playthroughs.
It doesn't fully reach that goal and isn't entirely stable, but it's an interesting model.
First of all, the model stays close to neutral toward the user.
The model is uncensored (a heretic'd Mistral serves as the base model) and was tested on ERP, gore, swearing, and hate speech. Note that it was tested in RP scenarios, not on direct tasks.
I was hooked by the smartness of this model. It's really good for a 24B, though not without hallucinations. Within three swipes I usually got a good response.
Context attention is also good, and it works nicely with many lorebooks. The model actively uses info from the character card, at least up to 10k context in use (10k was my testing limit).
The writing style is prompt-dependent: length, style, and format follow the character card, first message, user input, and system prompt. In my case the outputs were pleasant to read. Swipe variation is average; I've seen better.
Instructions are mostly followed. For summarization and similar tasks it's better to lower the temperature; at higher temperatures the model becomes unstable. 0.8 was the maximum for adequate responses.
Russian (RU) was tested and plays well.
Tested with the Mistral Tekken V7 preset, temperature 0.81, XTC off, and a modified Shingane-v1 system prompt.
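The settings above can be sketched as a small config helper. This is only an illustrative sketch: the dictionary keys, the helper name, and the exact lowered temperature for summarization are my own assumptions, not values from the model card.

```python
# Hypothetical sketch of the reported sampler settings; key names follow
# common inference backends and are assumptions, not part of the card.
ROLEPLAY_SETTINGS = {
    "temperature": 0.81,     # tested value; 0.8 is reported as the stable maximum
    "xtc_probability": 0.0,  # XTC off (a probability of 0 disables it in common backends)
}

def settings_for(task: str) -> dict:
    """Return sampler settings, lowering temperature for instruction-style tasks.

    The card only advises "lower temperature" for summarization; the 0.4
    value here is an illustrative choice, not a recommendation from the card.
    """
    cfg = dict(ROLEPLAY_SETTINGS)
    if task in ("summarize", "instruct"):
        cfg["temperature"] = 0.4
    return cfg
```

For roleplay, `settings_for("rp")` keeps the tested temperature of 0.81; for summarization, `settings_for("summarize")` drops it to a more conservative value.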