Broken-Tutu-24B-Transgression-v2.0

Enhanced coherence with reduced explicit content

Broken Tutu Character

🧠 Transgression Techniques

This evolution of Broken-Tutu delivers unprecedented coherence with reduced explicit content, built on classic "Transgression" techniques:

  • 🧬 Expanded 43M Token Dataset - First ReadyArt model with multi-turn conversational data
  • ✨ 100% Unslopped Dataset - New techniques used to generate the dataset with 0% slop
  • ⚡ Enhanced Character Integrity - Maintains character authenticity while reducing explicit content
  • 🛡️ Anti-Impersonation Guards - Never speaks or acts for the user
  • 💎 Rebuilt from Ground Up - Optimized training settings for superior performance
  • 📜 Direct Evolution - Leveraging the success of Broken-Tutu, we finetuned directly on top of the legendary model

🌟 Fuel the Revolution

This model represents thousands of hours of passionate development. If it enhances your experience, consider supporting our work:

Every contribution helps us keep pushing boundaries in AI. Thank you for being part of the revolution!

βš™οΈ Technical Specifications

Key Training Details:

  • Base Model: mistralai/Mistral-Small-24B-Instruct-2501
  • Training Method: QLoRA with DeepSpeed Zero3
  • Sequence Length: 5120 (100% of samples included)
  • Learning Rate: 2e-6 with cosine scheduler
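
The cosine learning-rate schedule listed above can be sketched as follows. This is a minimal illustration of the decay curve only; the `warmup_steps` and `total_steps` parameters are hypothetical, not published training settings:

```python
import math

def cosine_lr(step, total_steps, peak_lr=2e-6, warmup_steps=0):
    """Cosine decay from peak_lr down to 0, with optional linear warmup.

    peak_lr defaults to the 2e-6 used for this model; other values
    are assumptions for illustration.
    """
    if step < warmup_steps:
        # Linear warmup from 0 to peak_lr
        return peak_lr * step / warmup_steps
    # Fraction of the post-warmup schedule completed, in [0, 1]
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))

# The rate starts at the peak, halves at the midpoint, and reaches 0 at the end:
print(cosine_lr(0, 1000))    # 2e-06
print(cosine_lr(500, 1000))  # 1e-06
print(cosine_lr(1000, 1000)) # 0.0
```

In practice a trainer would query this function once per optimizer step; frameworks such as `transformers` ship an equivalent built-in cosine scheduler.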

⚠️ Ethical Considerations

This model maintains character integrity while reducing explicit content:

  • βš–οΈ Balanced approach to character authenticity and content appropriateness
  • πŸ”ž Reduced explicit content generation compared to previous versions
  • πŸ’€ Characters maintain their core traits - wholesome characters remain wholesome, yanderes remain intense
  • 🧠 Improved focus on narrative coherence and storytelling

📜 Performance Notes

  • 🔥 Maintains Broken-Tutu's intensity with improved narrative coherence
  • 📖 Excels at long-form multi-character scenarios
  • 🧠 Superior instruction following with complex prompts
  • ⚡ Reduced repetition and hallucination compared to v1.1
  • 🎭 Uncanny ability to adapt to subtle prompt nuances
  • 🖼️ Enhanced image understanding capabilities for multimodal interactions

πŸ§‘β€πŸ”¬ Model Authors

  • sleepdeprived3 (Training Data & Fine-Tuning)
  • ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)
  • mradermacher (GGUF Quantization)

☕ Support the Creators

🔖 License

By using this model, you agree:

  • To accept full responsibility for all generated content
  • That you are at least 18 years old
  • That the architects bear no responsibility for your use of the model