Broken-Tutu-24B-Transgression-v2.0
Enhanced coherence with reduced explicit content

Transgression Techniques
This evolution of Broken-Tutu delivers unprecedented coherence with reduced explicit content, using the classic "Transgression" techniques:
- Expanded 43M-Token Dataset - First ReadyArt model with multi-turn conversational data
- 100% Unslopped Dataset - Generated with new techniques to keep the dataset free of slop
- Enhanced Character Integrity - Maintains character authenticity while reducing explicit content
- Anti-Impersonation Guards - Never speaks or acts for the user
- Rebuilt from the Ground Up - Optimized training settings for superior performance
- Direct Evolution - Fine-tuned directly on top of the original Broken-Tutu model, building on its success
Fuel the Revolution
This model represents thousands of hours of passionate development. If it enhances your experience, consider supporting our work:
Every contribution helps us keep pushing boundaries in AI. Thank you for being part of the revolution!
Technical Specifications
Key Training Details:
- Base Model: mistralai/Mistral-Small-24B-Instruct-2501
- Training Method: QLoRA with DeepSpeed Zero3
- Sequence Length: 5120 (100% of samples included)
- Learning Rate: 2e-6 with cosine scheduler
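For reference, the training details above can be collected into a single config sketch. The field names below are illustrative placeholders, not taken from the actual training configuration; only the values come from this card.

```python
# Illustrative summary of the training setup described above.
# Key names are hypothetical; only the values are from the model card.
train_config = {
    "base_model": "mistralai/Mistral-Small-24B-Instruct-2501",
    "method": "QLoRA",
    "distributed": "DeepSpeed ZeRO-3",
    "sequence_len": 5120,      # 100% of samples fit within this length
    "learning_rate": 2e-6,
    "lr_scheduler": "cosine",
}
```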
Recommended Settings for true-to-character behavior: Mistral-V7-Tekken-T8-XML
GGUF
Notes: Q4_K_S/Q4_K_M recommended for a speed/quality balance, Q6_K for high quality, Q8_0 for best quality.
imatrix
Notes: Q4_K_S/Q4_K_M recommended. IQ1_S/IQ1_M for extreme low-VRAM setups. Q6_K for near-original quality.
EXL2
EXL3
AWQ
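As a rough rule of thumb, the GGUF recommendations above can be turned into a simple VRAM-based selector. The size thresholds below are approximate estimates for a 24B model (weights plus overhead), not official figures; adjust for your context length and backend.

```python
def pick_gguf_quant(vram_gib: float) -> str:
    """Suggest a GGUF quant for a ~24B model given available VRAM.

    Thresholds are rough, hypothetical estimates, not measured values.
    """
    if vram_gib >= 26:    # Q8_0: best quality
        return "Q8_0"
    if vram_gib >= 21:    # Q6_K: high / near-original quality
        return "Q6_K"
    if vram_gib >= 15:    # Q4_K_M: speed/quality balance
        return "Q4_K_M"
    return "IQ1_M"        # extreme low-VRAM fallback (imatrix quant)
```

For example, with these thresholds a 24 GiB GPU would land on Q6_K, while a 16 GiB GPU would land on Q4_K_M.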
Ethical Considerations
This model maintains character integrity while reducing explicit content:
- Balanced approach to character authenticity and content appropriateness
- Reduced explicit content generation compared to previous versions
- Characters maintain their core traits - wholesome characters remain wholesome, yanderes remain intense
- Improved focus on narrative coherence and storytelling
Performance Notes
- Maintains Broken-Tutu's intensity with improved narrative coherence
- Excels at long-form, multi-character scenarios
- Superior instruction following with complex prompts
- Reduced repetition and hallucination compared to v1.1
- Uncanny ability to adapt to subtle prompt nuances
- Enhanced image understanding capabilities for multimodal interactions
Model Authors
- sleepdeprived3 (Training Data & Fine-Tuning)
- ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)
- mradermacher (GGUF Quantization)
Support the Creators
License
By using this model, you agree:
- To accept full responsibility for all generated content
- That you are at least 18 years old
- That the architects bear no responsibility for your use of the model