---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Text-to-Image
- ControlNet
- Diffusers
- Flux.1-dev
- image-generation
- Stable Diffusion
base_model:
- Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0
---

# FLUX.1-dev-ControlNet-Union-Pro-2.0 (fp8)

This repository contains a unified ControlNet for the FLUX.1-dev model released by [Shakker Labs](https://huggingface.co/Shakker-Labs). This version has been quantized to FP8 format for optimized inference performance. We provide an [online demo](https://huggingface.co/spaces/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0).

# FP8 Quantization

This model has been quantized from the original BFloat16 format to FP8, as sketched below. The benefits include:

- **Reduced Memory Usage**: Approximately 50% smaller model size compared to BFloat16/FP16.
- **Faster Inference**: Potential speed improvements, especially on hardware with native FP8 support.
- **Minimal Quality Loss**: A carefully calibrated quantization process preserves output quality.
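As a rough illustration of the conversion (not the exact calibration pipeline used to produce this checkpoint), a BFloat16 checkpoint could be cast to PyTorch's `torch.float8_e4m3fn` dtype along these lines; the output filename is an assumption.

```python
# Hypothetical sketch: cast a BFloat16 ControlNet checkpoint to FP8 (e4m3).
# This is illustrative only and does not reproduce the calibration used for this repository.
import torch
from diffusers import FluxControlNetModel
from safetensors.torch import save_file

controlnet = FluxControlNetModel.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0",
    torch_dtype=torch.bfloat16,
)

# Cast every floating-point tensor to float8_e4m3fn; leave other dtypes untouched.
fp8_state_dict = {
    name: (param.to(torch.float8_e4m3fn) if param.is_floating_point() else param)
    for name, param in controlnet.state_dict().items()
}

# Output filename is an assumption.
save_file(fp8_state_dict, "diffusion_pytorch_model.fp8.safetensors")
```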
# Keynotes

In comparison with [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro):

- Removes the mode embedding, resulting in a smaller model size.
- Improves canny and pose, with better control and aesthetics.
- Adds support for soft edge; removes support for tile.

# Model Cards

- This ControlNet consists of 6 double blocks and 0 single blocks (see the sketch after this list); the mode embedding is removed.
- We train the model from scratch for 300k steps using a dataset of 20M high-quality general and human images. Training is done at 512x512 resolution in BFloat16 with batch size 128 and learning rate 2e-5; the guidance is uniformly sampled from [1, 7], and the text drop ratio is set to 0.20.
- This model supports multiple control modes, including canny, soft edge, depth, pose, and gray. You can use it just like a normal ControlNet.
- This model can be jointly used with other ControlNets.
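To sanity-check the architecture described above, the block counts can be read from the loaded model's config, assuming a recent diffusers release that exposes `num_layers` and `num_single_layers` on `FluxControlNetModel`:

```python
import torch
from diffusers import FluxControlNetModel

# Load the ControlNet and inspect its configuration.
controlnet = FluxControlNetModel.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0",
    torch_dtype=torch.bfloat16,
)

# Expected: 6 double blocks and 0 single blocks.
print("double blocks:", controlnet.config.num_layers)
print("single blocks:", controlnet.config.num_single_layers)
```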
# Showcases

Showcase images for each supported condition: canny, soft edge, pose, depth, and gray.

# Inference

```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = 'Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0'

controlnet = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# replace with other conditions
control_image = load_image("./conds/canny.png")
width, height = control_image.size

prompt = "A young girl stands gracefully at the edge of a serene beach, her long, flowing hair gently tousled by the sea breeze. She wears a soft, pastel-colored dress that complements the tranquil blues and greens of the coastal scenery. The golden hues of the setting sun cast a warm glow on her face, highlighting her serene expression. The background features a vast, azure ocean with gentle waves lapping at the shore, surrounded by distant cliffs and a clear, cloudless sky. The composition emphasizes the girl's serene presence amidst the natural beauty, with a balanced blend of warm and cool tones."

image = pipe(
    prompt,
    control_image=control_image,
    width=width,
    height=height,
    controlnet_conditioning_scale=0.7,
    control_guidance_end=0.8,
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```

# Multi-Inference

```python
import torch
from diffusers.utils import load_image
# https://github.com/huggingface/diffusers/pull/11350; after it is merged, you can import directly from diffusers
# from diffusers import FluxControlNetPipeline, FluxControlNetModel
# use the local files for now
from pipeline_flux_controlnet import FluxControlNetPipeline
from controlnet_flux import FluxControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = 'Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0'

controlnet = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=[controlnet], torch_dtype=torch.bfloat16)  # pass a list to enable multiple ControlNets
pipe.to("cuda")

# replace with other conditions
control_image = load_image("./conds/canny.png")
width, height = control_image.size

prompt = "A young girl stands gracefully at the edge of a serene beach, her long, flowing hair gently tousled by the sea breeze. She wears a soft, pastel-colored dress that complements the tranquil blues and greens of the coastal scenery. The golden hues of the setting sun cast a warm glow on her face, highlighting her serene expression. The background features a vast, azure ocean with gentle waves lapping at the shore, surrounded by distant cliffs and a clear, cloudless sky. The composition emphasizes the girl's serene presence amidst the natural beauty, with a balanced blend of warm and cool tones."

image = pipe(
    prompt,
    control_image=[control_image, control_image],  # try different condition pairs such as canny & depth or pose & depth
    width=width,
    height=height,
    controlnet_conditioning_scale=[0.35, 0.35],
    control_guidance_end=[0.8, 0.8],
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```

# Recommended Parameters

You can adjust `controlnet_conditioning_scale` and `control_guidance_end` for stronger control and better detail preservation. For better stability, we highly recommend using detailed prompts; in some cases, multiple conditions help. A minimal sketch for preparing the OpenCV-based conditions is shown after this list.

- Canny: use cv2.Canny, controlnet_conditioning_scale=0.7, control_guidance_end=0.8.
- Soft Edge: use [AnylineDetector](https://github.com/huggingface/controlnet_aux), controlnet_conditioning_scale=0.7, control_guidance_end=0.8.
- Depth: use [depth-anything](https://github.com/DepthAnything/Depth-Anything-V2), controlnet_conditioning_scale=0.8, control_guidance_end=0.8.
- Pose: use [DWPose](https://github.com/IDEA-Research/DWPose/tree/onnx), controlnet_conditioning_scale=0.9, control_guidance_end=0.65.
- Gray: use cv2.cvtColor, controlnet_conditioning_scale=0.9, control_guidance_end=0.8.
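As a minimal sketch of the OpenCV-based preprocessors listed above, the canny and gray conditions could be prepared as follows; the Canny thresholds and file paths are illustrative, and soft edge, depth, and pose require the external detectors linked above.

```python
import cv2
import numpy as np
from PIL import Image

# Load the source image (path is illustrative).
image = cv2.imread("input.png")

# Canny condition: edge map, replicated to 3 channels for the pipeline.
edges = cv2.Canny(image, 100, 200)  # thresholds are illustrative; tune per image
canny_condition = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Gray condition: grayscale image, also replicated to 3 channels.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray_condition = Image.fromarray(np.stack([gray] * 3, axis=-1))

canny_condition.save("conds/canny.png")
gray_condition.save("conds/gray.png")
```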
# Using FP8 Model

This repository includes the FP8 quantized version of the model. To use it, you need a PyTorch build with FP8 support:

```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union_fp8 = 'YOUR_USERNAME/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8'

# Load using the FP8 data type
controlnet = FluxControlNetModel.from_pretrained(controlnet_model_union_fp8, torch_dtype=torch.float8_e4m3fn)
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# The rest of the code is the same as with the original model
```

See `fp8_inference_example.py` for a complete example.

# Pushing Model to Hugging Face Hub

To push your FP8 quantized model to the Hugging Face Hub, use the included script:

```bash
python push_model_to_hub.py --repo_id "YOUR_USERNAME/FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8"
```

You will need the `huggingface_hub` library installed and must be logged in with your Hugging Face credentials.

# Resources

- [InstantX/FLUX.1-dev-IP-Adapter](https://huggingface.co/InstantX/FLUX.1-dev-IP-Adapter)
- [InstantX/FLUX.1-dev-Controlnet-Canny](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Depth](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro)

# Acknowledgements

This model is developed by [Shakker Labs](https://huggingface.co/Shakker-Labs). The original idea is inspired by [xinsir/controlnet-union-sdxl-1.0](https://huggingface.co/xinsir/controlnet-union-sdxl-1.0). All rights reserved.