---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
  - en
library_name: diffusers
pipeline_tag: text-to-image
tags:
  - Text-to-Image
  - ControlNet
  - Diffusers
  - Flux.1-dev
  - image-generation
  - Stable Diffusion
base_model: black-forest-labs/FLUX.1-dev
---

# FLUX.1-dev-ControlNet-Union-Pro-2.0

This repository contains a unified ControlNet for the FLUX.1-dev model, released by Shakker Labs. We provide an online demo. An FP8-quantized version provided by the community can be found at [ABDALLALSWAITI/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8](https://huggingface.co/ABDALLALSWAITI/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8).

## Keynotes

In comparison with [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro), this version:

- Removes the mode embedding, resulting in a smaller model size.
- Improves canny and pose, with better controllability and aesthetics.
- Adds support for soft edge; removes support for tile.

## Model Cards

- This ControlNet consists of 6 double blocks and 0 single blocks. The mode embedding is removed.
- We train the model from scratch for 300k steps on a dataset of 20M high-quality general and human images, at 512x512 resolution in BFloat16, with batch size 128, learning rate 2e-5, and guidance uniformly sampled from [1, 7]. The text drop ratio is set to 0.20.
- This model supports multiple control modes: canny, soft edge, depth, pose, and gray. You can use it just like a normal ControlNet.
- This model can be used jointly with other ControlNets.

## Showcases

(Showcase images for each control mode: canny, soft edge, pose, depth, gray.)

## Inference

```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = 'Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0'

controlnet = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# replace with other condition images
control_image = load_image("./conds/canny.png")
width, height = control_image.size

prompt = "A young girl stands gracefully at the edge of a serene beach, her long, flowing hair gently tousled by the sea breeze. She wears a soft, pastel-colored dress that complements the tranquil blues and greens of the coastal scenery. The golden hues of the setting sun cast a warm glow on her face, highlighting her serene expression. The background features a vast, azure ocean with gentle waves lapping at the shore, surrounded by distant cliffs and a clear, cloudless sky. The composition emphasizes the girl's serene presence amidst the natural beauty, with a balanced blend of warm and cool tones."

image = pipe(
    prompt,
    control_image=control_image,
    width=width,
    height=height,
    controlnet_conditioning_scale=0.7,
    control_guidance_end=0.8,
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```
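
The control image is expected to be a preprocessed condition map. Below is a minimal sketch of how such a map could be produced for the canny mode with `cv2.Canny` (the preprocessor recommended in the parameters section); the input path and thresholds are placeholder assumptions, not values from this repository.

```python
import cv2
import numpy as np
from PIL import Image

# load any source photo; "input.png" is a placeholder path
img = np.array(Image.open("input.png").convert("RGB"))

# canny expects a single-channel 8-bit image; thresholds are common defaults, not tuned values
edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_RGB2GRAY), 100, 200)

# replicate the edge map to 3 channels so it can be fed as a control image
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
control_image.save("./conds/canny.png")
```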

## Multi-Inference

```python
import torch
from diffusers.utils import load_image

# https://github.com/huggingface/diffusers/pull/11350
# You can import directly from diffusers by installing the latest version from source:
# from diffusers import FluxControlNetPipeline, FluxControlNetModel

# for now, use the local files
from pipeline_flux_controlnet import FluxControlNetPipeline
from controlnet_flux import FluxControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = 'Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0'

controlnet = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=[controlnet], torch_dtype=torch.bfloat16)  # wrap in a list to enable multiple ControlNets
pipe.to("cuda")

# replace with other condition images
control_image = load_image("./conds/canny.png")
width, height = control_image.size

prompt = "A young girl stands gracefully at the edge of a serene beach, her long, flowing hair gently tousled by the sea breeze. She wears a soft, pastel-colored dress that complements the tranquil blues and greens of the coastal scenery. The golden hues of the setting sun cast a warm glow on her face, highlighting her serene expression. The background features a vast, azure ocean with gentle waves lapping at the shore, surrounded by distant cliffs and a clear, cloudless sky. The composition emphasizes the girl's serene presence amidst the natural beauty, with a balanced blend of warm and cool tones."

image = pipe(
    prompt,
    control_image=[control_image, control_image],  # try different condition pairs, e.g. canny & depth or pose & depth
    width=width,
    height=height,
    controlnet_conditioning_scale=[0.35, 0.35],
    control_guidance_end=[0.8, 0.8],
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```
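
As a concrete variant of the call above (continuing from the same setup), two different condition maps can be combined. The file paths below are placeholder assumptions; the maps are assumed to have been prepared beforehand, e.g. with the preprocessors listed under Recommended Parameters.

```python
# hypothetical mixed-condition setup: canny + depth maps prepared under ./conds/
canny_image = load_image("./conds/canny.png")
depth_image = load_image("./conds/depth.png")

image = pipe(
    prompt,
    control_image=[canny_image, depth_image],
    width=width,
    height=height,
    # scales are reduced relative to single-condition use, since the conditions are applied jointly
    controlnet_conditioning_scale=[0.35, 0.35],
    control_guidance_end=[0.8, 0.8],
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```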

## Recommended Parameters

You can adjust `controlnet_conditioning_scale` and `control_guidance_end` to trade off control strength against detail preservation. For better stability, we strongly suggest using a detailed prompt; in some cases, multiple conditions help.

- Canny: use `cv2.Canny`, `controlnet_conditioning_scale=0.7`, `control_guidance_end=0.8`.
- Soft Edge: use AnylineDetector, `controlnet_conditioning_scale=0.7`, `control_guidance_end=0.8`.
- Depth: use depth-anything, `controlnet_conditioning_scale=0.8`, `control_guidance_end=0.8` (see the sketch after this list).
- Pose: use DWPose, `controlnet_conditioning_scale=0.9`, `control_guidance_end=0.65`.
- Gray: use `cv2.cvtColor`, `controlnet_conditioning_scale=0.9`, `control_guidance_end=0.8`.
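
As a minimal sketch of depth preprocessing with depth-anything: the snippet below assumes the `transformers` depth-estimation pipeline and the checkpoint id named in the comment, neither of which is part of this repository; any depth estimator producing a 3-channel map works.

```python
from transformers import pipeline
from PIL import Image

# assumption: a small Depth Anything checkpoint; swap in any depth-estimation model you prefer
depth_estimator = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")

# the pipeline returns a dict whose "depth" entry is a PIL image
depth = depth_estimator(Image.open("input.png"))["depth"]
depth.convert("RGB").save("./conds/depth.png")
```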

## Resources

## Acknowledgements

This model is developed by [Shakker Labs](https://huggingface.co/Shakker-Labs). The original idea is inspired by [xinsir/controlnet-union-sdxl-1.0](https://huggingface.co/xinsir/controlnet-union-sdxl-1.0). All rights reserved.

## Citation

If you find this project useful in your research, please cite us via

```bibtex
@misc{flux-cn-union-pro-2,
    author = {Shakker-Labs},
    title = {https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0},
    year = {2025},
}
```