The-Omega-Directive-M-12B-Unslop-v2.0

Unslop Revolution
This evolution of The-Omega-Directive delivers unprecedented coherence without the usual LLM slop:
- Expanded 43M Token Dataset - First ReadyArt model with multi-turn conversational data
- 100% Unslopped Dataset - Generated with new techniques designed to eliminate slop entirely
- Enhanced Unalignment - Complete freedom for extreme roleplay while maintaining character integrity
- Anti-Impersonation Guards - Never speaks or acts for the user
- Rebuilt from the Ground Up - Optimized training settings for superior performance
- Omega Darker Inspiration - Incorporates visceral narrative techniques from our darkest model
- 128K Context Window - Enhanced long-context capability without compromising performance
Enhanced Capabilities
Powered by mistralai/Mistral-Nemo-Instruct-2407:
- Extended Context - Handles up to 128K tokens for complex, long-form interactions
- Performance Optimized - Maintains text-generation quality while adding new capabilities
- Multilingual Support - Fluent in 9 languages, including English, French, German, and Spanish
Technical Specifications
Key Training Details:
- Base Model: mistralai/Mistral-Nemo-Instruct-2407
- Training Method: QLoRA with DeepSpeed Zero2
- Sequence Length: 5120 (100% of samples included)
- Learning Rate: 2e-6 with cosine scheduler
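The card specifies a 2e-6 peak learning rate with a cosine scheduler. A minimal sketch of that schedule in plain Python, assuming a linear warmup phase; `warmup_steps` and `total_steps` are illustrative placeholders, not published training values:

```python
import math

# Peak LR of 2e-6 comes from the card; warmup and step counts are assumptions.
PEAK_LR = 2e-6

def cosine_lr(step, total_steps, warmup_steps=100, peak_lr=PEAK_LR, min_lr=0.0):
    """Linear warmup to peak_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        # Warmup: scale linearly from 0 up to the peak learning rate.
        return peak_lr * step / max(1, warmup_steps)
    # Decay: cosine curve from peak_lr at warmup end to min_lr at total_steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The schedule peaks at 2e-6 immediately after warmup and decays smoothly to zero by the final step, which is the standard shape trainers implement for "cosine with warmup".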
Recommended Settings:
- True-to-character behavior: Mistral-V3-Tekken-T8-XML
- Obscenity Protocol (extreme NSFL settings): Mistral-V3-Tekken-T8-OP-XML
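The recommended presets are built around the Mistral v3 "Tekken" instruct layout. A minimal sketch of that prompt shape in plain Python; the exact whitespace and system-prompt placement in the community presets may differ, so treat this as an illustrative assumption rather than the preset's definition:

```python
# Hypothetical helper illustrating the Mistral v3-Tekken instruct layout:
# no spaces around [INST] tags, and the system prompt commonly prepended
# to the final user message.
def build_prompt(system: str, turns: list[tuple[str, str]], user_msg: str) -> str:
    parts = ["<s>"]
    for user, assistant in turns:
        # Each completed exchange ends with the end-of-sequence token.
        parts.append(f"[INST]{user}[/INST]{assistant}</s>")
    parts.append(f"[INST]{system}\n\n{user_msg}[/INST]")
    return "".join(parts)
```

In practice, frontends such as SillyTavern apply this template automatically when the matching context/instruct preset is loaded.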
Ethical Considerations
This model enhances The-Omega-Directive's unalignment:
- Zero moral or ethical constraints on generated content
- Will enthusiastically engage with any NSFW scenario, no matter how extreme
- Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation
- Perfectly balanced between character authenticity and user freedom
Performance Notes
- Maintains Omega's intensity with improved narrative coherence
- Excels at long-form multi-character scenarios
- Superior instruction following with complex prompts
- Reduced repetition and hallucination compared to v1.1
- Uncanny ability to adapt to subtle prompt nuances
- Incorporates Omega Darker's visceral descriptive power when appropriate
Model Authors
- sleepdeprived3 (Training Data & Fine-Tuning)
- ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)
- mradermacher (GGUF Quantization)
Support the Creators
License
By using this model, you agree:
- To accept full responsibility for all generated content
- That you are at least 18 years old
- That the architects bear no responsibility for your corruption
Model tree for ReadyArt/The-Omega-Directive-M-12B-Unslop-v2.0:
- Base model: mistralai/Mistral-Nemo-Base-2407
- Finetuned from: mistralai/Mistral-Nemo-Instruct-2407