# Erotophobia-24B-v1.1
My second merge, and my first model ever! Literally depraved.
I made the configuration simpler and replaced the personality model. I think it fits nicely now and produces an actual working model. Since this is my first model, I may be biased into thinking it's very good. Don't get me wrong, it is "working as intended", but try it out for yourself!
It is very very VERY obedient and will do what you tell it to do! It works in both assistant mode and roleplay mode. It has a deep understanding of personality and is quite visceral when describing organs; the narrative feels wholesome, but explicit when you need it. I personally like it very much, and it fits my use case.
Heavily inspired by FlareRebellion/DarkHazard-v1.3-24b and ReadyArt/Omega-Darker-Gaslight_The-Final-Forgotten-Fever-Dream-24B.
I would personally like to thank sleepdeprived3 for the amazing finetunes, DoppelReflEx for giving me the dream of making a merge model someday, and the people in the BeaverAI Club for the great inspirations.
Luv <3
## Recommended Usage
For roleplay mode, use Mistral-V7-Tekken-T4!

I personally use `nsigma` 1.5 and `temp` 4 with the rest neutralized. A bit silly, yeah, but add just a tiny bit of `min_p` (if you want) and then turn up XTC and DRY.

Also try Mistral-V7-Tekken-T5-XML; the system prompt is very nice.
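As an illustration of the sampler settings above, here is a minimal sketch of a sampling payload in the style of llama.cpp's server API. The field names (`top_n_sigma`, `xtc_probability`, `dry_multiplier`, etc.) are assumptions based on that backend; other frontends (KoboldCpp, SillyTavern, text-generation-webui) name these differently, and the XTC/DRY values shown are placeholders, not tuned recommendations.

```python
import json

# Hypothetical sampling payload for a llama.cpp-style /completion endpoint.
# Only temp 4 + nsigma 1.5 come from the recommendation above; the rest are
# illustrative defaults.
payload = {
    "prompt": "<s>[INST] Write a short scene. [/INST]",
    "temperature": 4.0,      # high temp, tamed by the nsigma sampler below
    "top_n_sigma": 1.5,      # the recommended "nsigma" setting
    "top_k": 0,              # neutralized
    "top_p": 1.0,            # neutralized
    "min_p": 0.02,           # the optional "tiny bit of min_p"
    "xtc_probability": 0.5,  # example XTC settings, not tuned values
    "xtc_threshold": 0.1,
    "dry_multiplier": 0.8,   # enable DRY repetition penalty
}

print(json.dumps(payload, indent=2))
```

You would POST this JSON to your backend's completion endpoint; check your backend's docs for its exact parameter names.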
For assistant mode, regular Mistral V7 with a `<s>` at the very beginning and a blank system prompt should do the trick. (Thanks to Dolphin!)
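To make the assistant-mode setup concrete, here is a small sketch of Mistral V7-style prompt assembly: a bare `<s>` BOS token, an empty system prompt, then an `[INST] ... [/INST]` turn. The exact tag layout is an assumption; the model's bundled `chat_template` is the authoritative format.

```python
def format_turn(user_message: str, system_prompt: str = "") -> str:
    """Build a single-turn Mistral V7-style prompt (sketch, not the official template)."""
    bos = "<s>"
    # Blank system prompt -> omit the system block entirely, as recommended above.
    system = f"[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]" if system_prompt else ""
    return f"{bos}{system}[INST]{user_message}[/INST]"

prompt = format_turn("Summarize the plot of Hamlet in two sentences.")
print(prompt)
```

Note that the raw `<s>` only matters if your frontend sends raw text; chat-completion endpoints usually add the BOS token for you.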
## Quants
Thanks to Artus for providing the Q8 GGUF quants here:
https://huggingface.co/ArtusDev/yvvki_Erotophobia-24B-v1.1-GGUF
Thanks to mradermacher for providing the static and imatrix quants here:
https://huggingface.co/mradermacher/Erotophobia-24B-v1.1-GGUF
https://huggingface.co/mradermacher/Erotophobia-24B-v1.1-i1-GGUF
## Safety
erm... :3
## Merge Details
This is a merge of pre-trained language models created using mergekit.
### Merge Method
This model was merged using the DARE TIES merge method using cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition as a base.
### Models Merged
The following models were included in the merge:
- ReadyArt/Omega-Darker_The-Final-Directive-24B
- aixonlab/Eurydice-24b-v3
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- ReadyArt/Forgotten-Safeword-24B-v4.0
### Configuration
The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
tokenizer:
  source: union
  chat_template: auto
models:
  - model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition # uncensored
  - model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b # personality
    parameters:
      weight: 0.3
  - model: aixonlab/Eurydice-24b-v3 # creativity & storytelling
    parameters:
      weight: 0.3
  - model: ReadyArt/Omega-Darker_The-Final-Directive-24B # unhinged
    parameters:
      weight: 0.2
  - model: ReadyArt/Forgotten-Safeword-24B-v4.0 # lube
    parameters:
      weight: 0.2
parameters:
  density: 0.3
```
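As a quick sanity check on the configuration above: the four weighted models form a convex blend (their weights sum to 1.0), while the base model contributes no task vector of its own, and `density: 0.3` means each task vector is randomly pruned to roughly 30% of its deltas before merging. A tiny sketch mirroring the YAML:

```python
# Per-model DARE-TIES weights copied from the configuration above.
weights = {
    "PocketDoc/Dans-PersonalityEngine-V1.2.0-24b": 0.3,
    "aixonlab/Eurydice-24b-v3": 0.3,
    "ReadyArt/Omega-Darker_The-Final-Directive-24B": 0.2,
    "ReadyArt/Forgotten-Safeword-24B-v4.0": 0.2,
}
density = 0.3  # fraction of each task vector's deltas kept by DARE pruning

total = sum(weights.values())
print(f"total weight: {total}")  # sums to 1.0 (up to float rounding)
```

To reproduce the merge itself, save the YAML as `config.yaml` and run mergekit's `mergekit-yaml config.yaml ./output` (assuming mergekit is installed).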