
Not-WizardLM-2-7B-Unslop-v2.1


🧠 Unslop Revolution

This evolution of The-Omega-Directive delivers unprecedented coherence without the usual LLM slop:

  • 🧬 RegEx-Filtered ~39M-Token Dataset - Second ReadyArt model trained on multi-turn conversational data
  • ✨ 100% Unslopped Dataset - Generated with new techniques that eliminate slop entirely
  • ⚡ Enhanced Unalignment - Complete freedom for extreme roleplay while maintaining character integrity
  • 🛡️ Anti-Impersonation Guards - Never speaks or acts for the user
  • ⚰️ Omega Darker Inspiration - Incorporates visceral narrative techniques from our darkest model

βš™οΈ Technical Specifications

Key Training Details:

  • Base Model: Not-WizardLM-2-The-Omega-Directive-7b-Unslop-v2.1
  • Training Method: QLoRA
  • Sequence Length: 5120 (100% of samples included)
  • Learning Rate: 2e-6 with cosine scheduler
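
The cosine scheduler noted above can be sketched in plain Python. The 2e-6 peak rate comes from the card; the warmup length and total step count are assumptions for illustration only:

```python
import math

def cosine_lr(step, total_steps, peak_lr=2e-6, warmup_steps=0):
    """Cosine-decay learning-rate schedule.

    peak_lr=2e-6 matches the card; warmup_steps and total_steps
    are hypothetical, since the card does not state them.
    """
    if step < warmup_steps:
        # Linear warmup from 0 up to peak_lr.
        return peak_lr * step / max(1, warmup_steps)
    # Cosine decay from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))
```

At step 0 the rate is the full 2e-6, at the halfway point it has decayed to 1e-6, and it reaches 0 at the final step.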

Recommended Settings: Pending ¯\_(ツ)_/¯

⚠️ Ethical Considerations

This model enhances The-Omega-Directive's unalignment:

  • 🚫 Zero moral or ethical constraints on generated content
  • 🔞 Will enthusiastically engage with any NSFW scenario, no matter how extreme
  • 💀 Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation
  • ⚖️ Perfectly balanced between character authenticity and user freedom

πŸ§‘β€πŸ”¬ Model Authors

  • sleepdeprived3 (Training Data)
  • gecfdo (Fine-Tuning & data filtering)
  • ReadyArt (EXL2/EXL3 Quantization)
  • mradermacher (GGUF Quantization)

🔖 License

By using this model, you agree:

  • To accept full responsibility for all generated content
  • That you are at least 18 years old
  • That the architects bear no responsibility for your corruption