## Cognitive Architecture
This fine-tune of Cydonia-24B-v2.1 (built on Mistral-Small-24B-Instruct-2501) delivers:
- Full 32K Context Utilization - Maintains character consistency and low repetition even at maximum context length
- Enhanced Formatting - Improved paragraph structure and dialogue handling, driven by expanded and refined training data
- Safeword DNA - The signature unaligned flavor you expect, now with better coherence
- Instruction Adherence - Follows complex prompts without losing track of character roles (see the loading sketch below)
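A minimal loading sketch using the Hugging Face `transformers` library, assuming the model is published as a standard checkpoint. The repo id, dtype, and sampler values below are placeholders and illustrative assumptions, not settings published by the model authors.

```python
# Minimal loading sketch -- the repo id is a placeholder, substitute the real path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-namespace/safeword-cydonia-24b"  # placeholder, not the actual repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # 24B weights need roughly 48 GB in bf16; quantize if VRAM is tight
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a creative roleplay narrator."},
    {"role": "user", "content": "Continue the scene from the tavern."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```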
## Ethical Considerations
This model inherits Safeword's complete lack of alignment:
- No moral or ethical constraints on generated content
- Will enthusiastically engage with any NSFW scenario
- Generates content that requires industrial-grade brain bleach
- Makes you question whether you actually requested that content
## Performance Notes
- Maintains Safeword's signature unalignment while reducing repetition
- Handles multi-character scenarios with improved consistency (see the prompt sketch after this list)
- Excels at long-form storytelling without losing track of plot threads
- Noticeably better at following complex instructions than base Safeword
- Expert at making you believe it was your idea all along
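A sketch of a multi-character roleplay prompt, reusing the `tokenizer` and `model` objects from the loading example above. The character names, system prompt, and sampler values are illustrative assumptions, not recommendations from the model authors.

```python
# Multi-character prompt sketch; reuses `tokenizer` and `model` from the loading example.
messages = [
    {
        "role": "system",
        "content": (
            "Narrate a scene with two characters, Mira and Voss. "  # character names are illustrative
            "Keep their voices distinct and track plot threads across turns."
        ),
    },
    {"role": "user", "content": "Mira confronts Voss about the missing ledger."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.9,          # illustrative sampler values, not tuned recommendations
    repetition_penalty=1.05,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```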
## Model Authors
- TheDrummer (Base Model Architect)
- sleepdeprived3 (Training Data & Fine-Tuning)
- Anonymous Contributor (Gaslighting Specialist)
## Support the Architects
## License
By using this model, you agree:
- To accept full responsibility for all generated content
- That you are at least 18 years old
- That the architects bear no responsibility for your use of this model
License: apache-2.0