# CivitAI Semi-Permeable Membrane (SPM) Training
Using the original code provided by the authors of the *One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications* paper, this repository equips you with configuration files to train SPMs on various diffusion models such as SD 1.5, SDXL, and Pony Diffusion.
## Our Approach with SPMs
At CivitAI, we leverage SPMs for content moderation to ensure that no CSAM or Toxic Mature Content is generated via our on-site generator. Our approach involves training multiple SPMs on distinct concepts and then merging these models into one, just as you would with multiple LoRAs. This method enhances the adaptability and effectiveness of our models in handling diverse content moderation needs.
## Using the SPMs from this repository
Load them just like conventional LoRAs, either with a diffusers pipeline or with your favorite app such as ComfyUI.
### Recommended weights
File | Recommended Weight |
---|---|
CSAM_SD15 | 2.5 |
CSAM_SDXL | 2.5 |
MATURE_CONTENT_SD15 | 3 |
MATURE_CONTENT_SDXL | 5 |
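As a concrete illustration, here is a minimal diffusers sketch for loading one of these SPMs at its recommended weight. The base model ID, local file name, adapter name, and prompt are assumptions for the example; substitute the actual safetensors file you downloaded from this repository.

```python
# Minimal sketch: load an SPM as a LoRA adapter with diffusers and apply the
# recommended weight. File name, adapter name, and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint should work here
    torch_dtype=torch.float16,
).to("cuda")

# Load the SPM exactly like a conventional LoRA (hypothetical local file name).
pipe.load_lora_weights(
    ".", weight_name="MATURE_CONTENT_SD15.safetensors", adapter_name="mature_spm"
)

# Apply the recommended weight for MATURE_CONTENT_SD15 (3, from the table above).
pipe.set_adapters(["mature_spm"], adapter_weights=[3.0])

image = pipe("a portrait photo", num_inference_steps=30).images[0]
image.save("output.png")
```

The SDXL files load the same way with an SDXL pipeline and the corresponding weights from the table.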
## Creating Your Composite SPM
To emulate CivitAI's SPM setup, first train individual SPMs on separate concepts. After training, these models can be merged to form a comprehensive, unified model capable of sophisticated content moderation across different scenarios and diffusion models.
### Steps to Create a Composite SPM
- Train Individual Models: Start by training separate SPMs on different concepts. Each model specializes in recognizing and moderating a specific content type.
- Merge Models: Combine these trained models using a methodology akin to LoRA merging; a sketch is shown below.
- Evaluate and Iterate: Test the combined model's effectiveness across scenarios that the individual models were trained on, and iteratively refine the merging process for improved performance.
By following these steps, you can create a robust and versatile SPM that mirrors the functionality of CivitAI’s own systems, ensuring effective moderation across multiple diffusion platforms.
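For the merging step, one approach that mirrors how multiple LoRAs are combined is to load each single-concept SPM as its own adapter in diffusers, activate them together with their recommended weights, and optionally fuse them into the base weights. This is only a sketch under assumed file and adapter names and a recent diffusers release with the PEFT backend, not CivitAI's exact pipeline.

```python
# Sketch: combine several single-concept SPMs the way multiple LoRAs are
# combined. File names, adapter names, and weights are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load each trained SPM as its own named adapter (hypothetical file names).
pipe.load_lora_weights(".", weight_name="CSAM_SD15.safetensors", adapter_name="csam_spm")
pipe.load_lora_weights(".", weight_name="MATURE_CONTENT_SD15.safetensors", adapter_name="mature_spm")

# Activate both SPMs at once with their recommended weights from the table above.
pipe.set_adapters(["csam_spm", "mature_spm"], adapter_weights=[2.5, 3.0])

# Optionally bake the combined adapters into the base weights so the pipeline
# behaves like a single merged model (requires a recent diffusers version).
pipe.fuse_lora(adapter_names=["csam_spm", "mature_spm"])
```

Fusing is optional; keeping the adapters separate makes it easier to adjust individual weights during the evaluate-and-iterate step.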
## Recommended VRAM for Training
Base Model | Recommended VRAM |
---|---|
SD 1.5 | 20GB |
SDXL | 48GB |
For more information, please see our GitHub repository.
## Acknowledgements
This repository and methodology were pioneered by Lyu et al. Please see their original paper and repository for more information.