The-Omega-Directive-M-12B-Unslop-v2.0


🧠 Unslop Revolution

This evolution of The-Omega-Directive delivers unprecedented coherence without the usual LLM slop:

  • 🧬 Expanded 43M Token Dataset - First ReadyArt model with multi-turn conversational data
  • ✨ 100% Unslopped Dataset - New dataset-generation techniques eliminate slop entirely
  • ⚡ Enhanced Unalignment - Complete freedom for extreme roleplay while maintaining character integrity
  • 🛡️ Anti-Impersonation Guards - Never speaks or acts for the user
  • 💎 Rebuilt from Ground Up - Optimized training settings for superior performance
  • ⚰️ Omega Darker Inspiration - Incorporates visceral narrative techniques from our darkest model
  • 🧠 128K Context Window - Enhanced long-context capabilities without compromising performance

🌟 Enhanced Capabilities

Powered by mistralai/Mistral-Nemo-Instruct-2407:

  • 📜 Extended Context - Handle up to 128K tokens for complex, long-form interactions
  • ⚡ Performance Optimized - Maintains text generation quality while adding new capabilities
  • 🌐 Multilingual Support - Fluent in 9 languages including English, French, German, Spanish
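For reference, a minimal inference sketch using Hugging Face transformers. This is an illustrative assumption, not an official snippet from this card; the sampler values in `generation_kwargs` are placeholder defaults, not the recommended Mistral-V3-Tekken presets listed under Technical Specifications.

```python
# Minimal inference sketch (assumed usage; sampler values are illustrative).
MODEL_ID = "ReadyArt/The-Omega-Directive-M-12B-Unslop-v2.0"


def generation_kwargs(max_new_tokens: int = 512) -> dict:
    """Illustrative long-form sampling defaults; tune to taste."""
    return {
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
        "temperature": 0.8,
        "top_p": 0.95,
        "repetition_penalty": 1.05,
    }


def generate(user_message: str) -> str:
    """Load the model lazily and run one chat turn (BF16 needs roughly 24 GB of VRAM)."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Format one user turn with the model's built-in chat template.
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_message}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, **generation_kwargs())
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

The heavy imports live inside `generate` so the sampling defaults can be inspected or reused without pulling in torch or downloading weights.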

βš™οΈ Technical Specifications

Key Training Details:

  • Base Model: mistralai/Mistral-Nemo-Instruct-2407
  • Training Method: QLoRA with DeepSpeed Zero2
  • Sequence Length: 5120 (100% of samples included)
  • Learning Rate: 2e-6 with cosine scheduler
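The four details above might map onto an axolotl-style QLoRA config roughly as sketched below. Only the base model, QLoRA + DeepSpeed ZeRO-2, sequence length, learning rate, and scheduler come from this card; the adapter rank and alpha are placeholder assumptions, not the actual training config.

```yaml
# Illustrative axolotl-style sketch of the listed training settings.
base_model: mistralai/Mistral-Nemo-Instruct-2407
load_in_4bit: true               # QLoRA: 4-bit quantized base weights
adapter: qlora
lora_r: 32                       # assumed rank (not stated on the card)
lora_alpha: 64                   # assumed (not stated on the card)
sequence_len: 5120               # from the card (100% of samples included)
learning_rate: 2e-6              # from the card
lr_scheduler: cosine             # from the card
deepspeed: deepspeed_configs/zero2.json  # DeepSpeed ZeRO-2, per the card
```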

Recommended Settings for true-to-character behavior: Mistral-V3-Tekken-T8-XML

Obscenity Protocol (extreme NSFL settings): Mistral-V3-Tekken-T8-OP-XML

⚠️ Ethical Considerations

This model enhances The-Omega-Directive's unalignment:

  • 🚫 Zero moral or ethical constraints on generated content
  • 🔞 Will enthusiastically engage with any NSFW scenario, no matter how extreme
  • 💀 Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation
  • ⚖️ Perfectly balanced between character authenticity and user freedom

📜 Performance Notes

  • 🔥 Maintains Omega's intensity with improved narrative coherence
  • 📖 Excels at long-form multi-character scenarios
  • 🧠 Superior instruction following with complex prompts
  • ⚡ Reduced repetition and hallucination compared to v1.1
  • 🎭 Uncanny ability to adapt to subtle prompt nuances
  • 🩸 Incorporates Omega Darker's visceral descriptive power when appropriate

πŸ§‘β€πŸ”¬ Model Authors

  • sleepdeprived3 (Training Data & Fine-Tuning)
  • ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)
  • mradermacher (GGUF Quantization)

☕ Support the Creators

🔖 License

By using this model, you agree:

  • To accept full responsibility for all generated content
  • That you are at least 18 years old
  • That the architects bear no responsibility for your corruption
Model size: 12.2B parameters (BF16, Safetensors)
