Join our Discord! https://discord.gg/BeaverAI

Nearly 7000 members strong πŸ’ͺ A hub for users and makers alike!


Drummer is open for work / employment (I'm a Software Engineer). Contact me through any of these channels: https://linktr.ee/thelocaldrummer

Thank you to everyone who subscribed through Patreon. Your support helps me chug along in this brave new world.


Drummer proudly presents...

Cydonia 24B v4.1 πŸ’Ώ


Usage

  • Prompt format: Mistral v7 Tekken (see the sketch below)
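
Below is a minimal sketch of prompting the model through transformers, assuming the bundled tokenizer ships the Mistral v7 Tekken chat template. The repo id TheDrummer/Cydonia-24B-v4.1 and the sampling settings are assumptions, not official recommendations, so adjust them for your backend and VRAM.

```python
# Minimal sketch: let the bundled tokenizer render the chat template
# (assumed to be Mistral v7 Tekken). Repo id and sampling settings are
# assumptions -- adjust for your own setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheDrummer/Cydonia-24B-v4.1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Set the opening scene in a rain-soaked city."},
]

# apply_chat_template renders the conversation with the template shipped
# alongside the tokenizer, keeping the prompt format consistent with training.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=300, temperature=0.8, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If you run the model through a frontend instead, pick its Mistral v7 Tekken (or equivalent Mistral) preset so the prompt format still matches.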

Description

Cydonia Evolved again.

I have to praise this model for its focus. Like I said earlier, it still keeps things straight at 12K. In my personal evaluation, I think it has already beaten Loki.

Damn, okay, this model is actually pretty good. I don't have enough VRAM to test it on longer chats up to 16K, but at 6K it's looking good and without DeepSeek's slop.

Wow, for a 24B this thing has some writing chops. It nails mood and nuance with its prose, descriptive without going purple. You may have cracked the Cydonias for good with this one. The more I play with it, the more it feels like a level up from the prior ones. I haven't gotten into long context yet, though. My cards tend to favor the opposite or at best neutral. It's rolling with the card and nailing it: the card is a bit fallen, and it's writing prose to match. Yeah, this one's a banger.

Very good. For a 24B, the best I've come across. Even on swipes it stays creative, writing just as well as the swipes before it without recycling anything from them. It doesn't go overboard on creativity like Gemma can; it'll write what you tell it, or in RP it picks up on things pretty accurately. The prose isn't purple either, it's good.

I dunno how you broke the spell R1 Cydonia had on me, or what made me try this on a whim, but you have gold on your hands with this tune. Again.

It really doesn't feel like a Mistral tune, which is honestly the best compliment I can give it. I'm not getting the usual Mistral tune-isms from it.

It's probably the best Cydonia.


Links

Special Thanks

Hoping to make SleepDeprived proud with this one. RIP.

config-v4j
