ControlNet v1.1 Models And Links To Compatible Stable Diffusion v1.5 Type Models Converted To Apple CoreML Format

For use with a Swift app like MOCHI DIFFUSION or the SwiftCLI

All of the models in this repo work with Swift and the apple/ml-stable-diffusion pipeline (release 0.4.0 or 1.0.0). They were not built for, and will not work with, a Python Diffusers pipeline. They require ml-stable-diffusion for command line use, or a Swift app that supports ControlNet, such as MOCHI DIFFUSION 4.0 (June 2023) or later.
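A command-line run with the ml-stable-diffusion SwiftCLI looks roughly like the sketch below. The flags follow the apple/ml-stable-diffusion README; the prompt, paths and the "Canny" model name are illustrative, not taken from this repo:

```shell
# Hedged sketch of a SwiftCLI run with a ControlNet enabled.
# Run from a checkout of apple/ml-stable-diffusion (0.4.0 or 1.0.0).
swift run StableDiffusionSample "a photo of a cat" \
  --resource-path path/to/base-model-folder \
  --controlnet Canny \
  --controlnet-inputs path/to/canny-edges.png \
  --output-path outputs
```

The `--controlnet` value must match the name of a .mlmodelc bundle in the "controlnet" folder described in the command-line notes further down.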

The ControlNet models in this repo have both "Original" and "Split-Einsum" versions, all built for SD-1.5 type models. They will not work with SD-2.1 type models. The smaller zip files, marked "SE", each contain a single "Split-Einsum" model. The larger zip files, without "SE", each contain a set of "Original" models at 4 different resolutions.

The ControlNet model files are in the "CN" folder of this repo. They are zipped and need to be unzipped after downloading. The larger zips hold "Original" types at 512x512, 512x768, 768x512 and 768x768. The smaller zips with "SE" each hold a single "Split-Einsum" model.

If you are using a GUI like MOCHI DIFFUSION 4.0, the app will most likely guide you to the correct location/arrangement for your ControlNet model folder.

Please note that when you unzip the "Original" ControlNet files from this repo (for example Canny.zip), they unzip into a folder with the four actual model files inside. That folder is just a holding folder created for the zipping process. What you want to move into your ControlNet model folder in Mochi Diffusion is the individual files, not the folder they unzip into. The "Split-Einsum" zips hold just a single file and don't use a holding folder. To make things even more confusing, on some Mac systems an individual ControlNet model file, for example Canny-5x5.mlmodelc, will appear in Finder as a folder, not a file. Move the Canny-5x5.mlmodelc file or folder (and the other .mlmodelc files or folders) into your ControlNet store folder. Don't move the plain "Canny" holding folder. This is different from base models, where you do want to move the folder that the downloaded zip unzips into. See the images here and here for an example of how my folders are set up for Mochi Diffusion.
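The move described above can be sketched on the command line. The holding-folder contents are simulated here rather than actually unzipped, and the store-folder name is an assumption; only Canny-5x5.mlmodelc is named in this repo:

```shell
# Simulate the holding folder that unzipping Canny.zip produces
# (Canny-5x5.mlmodelc stands in for one of the four bundles).
mkdir -p Canny/Canny-5x5.mlmodelc

# Hypothetical ControlNet store folder used by your GUI or CLI setup.
mkdir -p controlnet-store

# Move the .mlmodelc bundles themselves, not the "Canny" holding folder.
mv Canny/*.mlmodelc controlnet-store/

ls controlnet-store
```

Note that a .mlmodelc bundle is itself a directory on disk, which is why Finder sometimes shows it as a folder; `mv` handles it the same way either way.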

The SD models (base models) linked at the bottom of this page were relocated from this repo to individual model repos at the CORE ML MODELS COMMUNITY repo. The links will take you directly to each model. They are for "Original" and "Split-Einsum".

The Stable Diffusion v1.5 model and the other SD 1.5 type models contain both the standard Unet and the ControlledUnet used for a ControlNet pipeline. The correct one will be used automatically based on whether a ControlNet is enabled or not.

They include VAEEncoder.mlmodelc bundles that allow Image2Image to operate correctly at the noted resolutions when used with a current Swift CLI pipeline, or with a current GUI built with ml-stable-diffusion 0.4.0 or 1.0.0, such as MOCHI DIFFUSION 4.0 or later.

The sizes noted for all model type inputs/outputs are WIDTH x HEIGHT: 512x768 is "portrait" orientation and 768x512 is "landscape" orientation.

There is also a "MISC" folder at this repo that has text files with some notes and a screencap of my directory structure. These are provided for those who want to convert models themselves and/or run the models with a SwiftCLI. The notes are not perfect, and may be out of date if any of the Python or CoreML packages referenced have been updated recently. You can open a Discussion here if you need help with any of the "MISC" items.

NOTE: At present, it appears that the python_coreml_stable_diffusion package from ml-stable-diffusion 6.3 is the latest version that will convert ControlNet models that work correctly. Conversions with python_coreml_stable_diffusion 1.0.0, which is from ml-stable-diffusion 7.0b1 or 7.0b2, will throw errors when used.
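A conversion run with that package looks roughly like the sketch below, using the `torch2coreml` entry point from the ml-stable-diffusion repo. The Hugging Face model names shown are illustrative examples, not files from this repo, and the exact flag set you need may differ; check the repo's README against your checkout:

```shell
# Hedged sketch of a ControlNet conversion with python_coreml_stable_diffusion
# (from an ml-stable-diffusion checkout at the version noted above).
python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-controlnet lllyasviel/control_v11p_sd15_canny \
  --model-version runwayml/stable-diffusion-v1-5 \
  --bundle-resources-for-swift-cli \
  -o output-dir
```

Conversions like this download the source models from Hugging Face, so they need network access and a working miniconda3 environment as covered in the "MISC" notes.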

For command line use, the "MISC" notes cover setting up a miniconda3 environment. If you are using the command line, please read the notes concerning naming and placement of your ControlNet model folder. Briefly, they will need to go inside a "controlnet" folder that you placed inside your base model folder. You'll need a "controlnet" folder inside each base model folder, or a symlink named "controlnet" pointing to a central folder with all your ControlNet models inside it.
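The symlink arrangement can be sketched like this; all folder names here are illustrative, except that the link itself must be named exactly "controlnet":

```shell
# Central store holding all of your ControlNet .mlmodelc bundles
# (the name "ControlNet-store" is an assumption).
mkdir -p ControlNet-store

# A base model folder; each base model needs its own "controlnet" entry.
mkdir -p models/my-sd15-base_cn

# Symlink named "controlnet" inside the base model folder,
# pointing at the central store.
ln -s "$(pwd)/ControlNet-store" models/my-sd15-base_cn/controlnet

ls -l models/my-sd15-base_cn
```

Using an absolute path in the symlink target, as above, keeps the link valid no matter which directory the CLI is run from.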

If you encounter any models in this repo that do not work correctly with ControlNet, using the current apple/ml-stable-diffusion SwiftCLI pipeline, or Mochi Diffusion 4.0, please leave a report in the Community Discussion area. If you would like to add models that you have converted, leave a message there as well, and we will grant you access to the appropriate repo.

ControlNet Models - All Current SD-1.5-Type ControlNet Models

Each larger zip file contains a set of 4 "Original" types at resolutions of 512x512, 512x768, 768x512 and 768x768. Each smaller zip file, with the "SE" notation, contains a single "Split-Einsum" file.

  • Canny -- Edge Detection, Outlines As Input
  • Depth -- Reproduces Depth Relationships From An Image
  • InPaint -- Use Masks To Define And Modify An Area (not sure how this works)
  • InstrP2P -- InstructPix2Pix - "Change X To Y"
  • LineAnime -- Find And Reuse Small Outlines, Optimized For Anime
  • LineArt -- Find And Reuse Small Outlines
  • MLSD -- Find And Reuse Straight Lines And Edges
  • NormalBAE -- Reproduce Depth Relationships Using Surface Normal Depth Maps
  • OpenPose -- Copy Body Poses
  • Scribble -- Freehand Sketch As Input
  • Segmentation -- Find And Reuse Distinct Areas
  • Shuffle -- Find And Reorder Major Elements
  • SoftEdge -- Find And Reuse Soft Edges
  • Tile -- Subtle Variations Within Batch Run

Base Models - A Variety Of SD-1.5-Type Models Compatible With ControlNet

Other ControlNet Compatible Base Models Are Listed At CORE ML MODELS COMMUNITY

Look for "_cn" at the end of the names!
