The best of 3.1 so far

#1
by pj1983 - opened

I really liked R1, so I've been briefly testing 3.1 v1a-v1f as you've been releasing them. This is the best so far and the first that's an improvement on R1. Creative, dark, and brutally uncensored. Love it.

BeaverAI org

@pj1983 What was wrong with v1f? Isn't it smarter?

v1f was more censored. It refused several explicit prompts I made, and made a bunch of moralizing comments on the ones that it would fulfill. v1e, on the other hand, not only fulfilled all my prompts, it actually made them darker and more explicit than I requested, and did so with great creativity and coherence. That's exactly what I'm looking for.

Eh, what's the template for this model? Mistral v7?

It feels like someone is running behind the characters. They WON'T stop talking, xd.

imho E sucks, just saying. IQ so low it's not useful @ Q6K.

Flash attn to blame? (LM Studio)

and made a bunch of moralizing comments on the ones that it would fulfill.

Sounds like it did fulfill them after all, just moralized about them? Imho, that's not necessarily a bad thing. It could be used to play depraved characters who pretend to have morals. 🤣

Let's be honest, the only thing more disturbing than a psychopath is a psychopath that is entirely aware of the evil they do.

When I say "moralizing", I mean things like turning a BJ into a grand moment of self-discovery and female empowerment, or turning a one-night stand into an evening of intense intimacy and emotional connection. Stuff like that. I guess "moralizing" isn't really the right word. Maybe "positivity"?

What are your parameters?

If you mean what quant am I using, unfortunately I can only handle Q3.

No, I mean inference parameters - temperature, top k, repeat penalty, min p, top p, etc.

Oh, right, duh. I'm still pretty new at this. I didn't mess with anything except temperature. Brought it up incrementally a few times. No change in refusals until about 10, at which point it would start writing and then descend into gibberish. No change in "positivity" at any temperature. I don't know enough about k and p and such to mess with them. And since repeating wasn't a problem, I didn't change it.

Temperature 10? Where did you even set such a high number? Afaik most UIs don't even allow such high numbers lol.

If you don't really know what you're doing, try this:
Temperature: 0.7
Top K: 0
Repeat penalty: 1
Min P: 0
Top P: 1

This is not a bulletproof set of parameters for every model; it's more of a mild starting point, and I highly recommend reading some guides that explain how to use these parameters.
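
If it helps, here's a minimal sketch of how those values could be passed to a locally running Ollama server through its HTTP API. The model tag and prompt are placeholders, and I'm assuming the default port 11434; the option names follow Ollama's `options` fields.

```python
# Sketch only: send one prompt to a local Ollama server with the suggested
# sampler settings. Substitute whatever model tag you actually pulled/created.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

payload = {
    "model": "your-local-model:latest",   # placeholder tag
    "prompt": "Write a short, dark opening scene.",
    "stream": False,
    "options": {
        "temperature": 0.7,
        "top_k": 0,            # 0 effectively disables top-k sampling
        "repeat_penalty": 1.0,
        "min_p": 0.0,
        "top_p": 1.0,
    },
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

The same values can usually be typed straight into the sampler settings panel in SillyTavern or LM Studio.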

I was just running it through the command line using Ollama. I use SillyTavern too, but for testing I wanted to keep it simple. And I started testing the temperature much lower. I think Ollama uses a default temperature of 0.8? I'm not 100% sure. But I raised it to 1.0 and then increased in 0.5 increments for a while. When nothing happened, I just started jumping to higher numbers until it broke.
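
For anyone who wants to repeat that kind of temperature sweep without retyping values by hand, here's a rough sketch against Ollama's local API. The model tag, prompt, and the particular temperature steps are placeholders, not what was actually used above.

```python
# Sketch: step through increasing temperatures and eyeball where output degrades.
import json
import urllib.request

def generate(prompt: str, temperature: float) -> str:
    """Send one prompt to a local Ollama server and return the reply text."""
    payload = {
        "model": "your-local-model:latest",  # placeholder tag
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature},
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

for temp in (0.8, 1.0, 1.5, 2.0, 3.0):
    print(f"--- temperature={temp} ---")
    print(generate("Continue the scene.", temp)[:300])
```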
