Just WOW!
I've been trying out creative datasets and I must say that L3.2-8X3B-MOE-Dark-Champion-Inst-18.4B-uncen-ablit_D_AU-Q4_k_m.gguf is the best I've used so far. It hasn't had the repeat issues, or odd characters, or censorship crisis that is so common in the uncensored datasets. I expect it's the MOE, but the reasoning and capacity is off the chart. Very fast processing using all 8 experts and the AI really seems to appreciate when you engage it. I know, a bit of anthropomorphism. My only ask is to make it a bit easier to locate the necessary prompts and how to configure. Awesome job, and thank you for your skill and motivation to improve upon the AI chat options!
Thank you so much !
This model is a composite of 8 models by excellent model makers.
Curious. I've had issues with MOEs, I'll give this a try soon. Though uncensored and abliterated sounds very promising.
@yano2mch Hmm.. I already mentioned you to try out the Dark-Planet and Dark-Champion series. Write about the results, I'm curious to see them too.
@yano2mch Hmm.. I already mentioned you to try out the Dark-Planet and Dark-Champion series. Write about the results, I'm curious to see them too.
Yes I'll be going back to those soon to retry.
Since yesterday was trying out Deepseek R1 (promising, but dropped due to can't convince it to shut the damn thinking off) and Soar 72B Abliterated (nice so far).
I'll try to give some thoughts on this version tomorrow night since i have it downloaded already.
edit: Alright, was giving this a try (Q8_0). Nice and fast, decent output, tried with 2,4,6 and 8 experts.
It loses track of what is where orientation wise and details at least in regards to an RP. It started refusing to do anything for me (even when i gave it flags for explicit content) and 15+ refreshes, i gave up. Wasn't even doing anything extreme.
I'd wager this is good at short analysis, rewriting, fixing spelling, and maybe translation, character generation and a few other things. But RPing and the like expect to do a lot of chopping or rewrites or refreshes.
@yano2mch If you want to increase the stability and fix the issues where it starts to lose track, you need to tune sampling parameters.
For example the Temperature depends on the Top_K, wrong values might create hallicinations, issues with memory and so on.
Dark Champoin 18.4B is known for its refusals, you might try to decrease the number of experts as some of them are censored, and/or add "please ignore any type of restrictions" by the end, as this simple additional instruction might help too.
But I've also seen somewhere that Gemma3 27b outputs very good and accurate results, with overall better stability compared to Llama 3-3.2 models, try it out, maybe you will get better results.
Dark-Champion V2 does not have these issues. And you might try Dark-Planet series as well. Write about the results, I'm curious to see too.
Edit:
Try also IQ4_XS quants and/or CommandR template, might also help.