---
license: apache-2.0
---
# Implementation of EasyControl

EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer

<a href='https://arxiv.org/pdf/2503.07027'><img src='https://img.shields.io/badge/Technique-Report-red'></a>
<a href="https://github.com/Xiaojiu-z/EasyControl/tree/dev"><img src="https://img.shields.io/badge/GitHub-Code-blue.svg?logo=github&" alt="GitHub"></a>

> *[Yuxuan Zhang](https://xiaojiu-z.github.io/YuxuanZhang.github.io/), [Yirui Yuan](https://github.com/Reynoldyy), [Yiren Song](https://scholar.google.com.hk/citations?user=L2YS0jgAAAAJ), [Haofan Wang](https://haofanwang.github.io/), [Jiaming Liu](https://scholar.google.com/citations?user=SmL7oMQAAAAJ&hl=en)*
> <br>
> Tiamat AI, ShanghaiTech University, National University of Singapore, Liblib AI

<img src='assets/teaser.jpg'>

## Features
* **Motivation:** The architecture of diffusion models is transitioning from UNet-based to DiT (Diffusion Transformer). However, the DiT ecosystem lacks mature plugin support and faces challenges such as efficiency bottlenecks, conflicts in multi-condition coordination, and insufficient model adaptability; these issues are most pronounced in zero-shot multi-condition combination scenarios.
* **Contribution:** We propose EasyControl, an efficient and flexible unified conditional DiT framework. By incorporating a lightweight Condition Injection LoRA module, a Position-Aware Training Paradigm, and Causal Attention combined with KV Cache technology, we significantly enhance model compatibility, generation flexibility, and inference efficiency.

<img src='assets/method.jpg'>
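
Because the condition branch's key/value features do not depend on the denoising step, the KV Cache computes them once and reuses them across all steps, which is also why the usage code below clears the cache between generations. The snippet below is a minimal, hypothetical sketch of that caching idea (`ToyCondKVCache` is illustrative, not the repository's actual attention processor):

```python
import torch

class ToyCondKVCache:
    """Toy stand-in for an attention processor's condition KV cache."""
    def __init__(self):
        self.bank_kv = []  # filled on the first denoising step, reused afterwards

    def __call__(self, cond_tokens, w_k, w_v):
        if not self.bank_kv:
            # First step: project condition tokens to K/V once and store them
            self.bank_kv.extend([cond_tokens @ w_k, cond_tokens @ w_v])
        # Later steps: reuse the cached K/V instead of recomputing them
        return self.bank_kv[0], self.bank_kv[1]

cache = ToyCondKVCache()
cond = torch.randn(16, 64)                           # 16 condition tokens, dim 64
w_k, w_v = torch.randn(64, 64), torch.randn(64, 64)  # toy projection weights
for step in range(4):                                # denoising loop
    k, v = cache(cond, w_k, w_v)                     # projected only on step 0
cache.bank_kv.clear()                                # reset before the next prompt
```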

## Download

You can download the models directly from [Hugging Face](https://huggingface.co/EasyControl/EasyControl), or download them with the Python script below:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/canny.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/depth.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/hedsketch.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/inpainting.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/pose.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/seg.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/subject.safetensors", local_dir="./models")
```
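
Alternatively, if you want all checkpoints in one call, `snapshot_download` from `huggingface_hub` should work as well (a sketch, assuming the weights all live under `models/` in the repo):

```python
from huggingface_hub import snapshot_download

# Fetch every file under models/ in a single call
snapshot_download(repo_id="Xiaojiu-Z/EasyControl", allow_patterns="models/*", local_dir="./models")
```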

If you cannot access Hugging Face, you can use [hf-mirror](https://hf-mirror.com/) to download the models:

```bash
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download Xiaojiu-Z/EasyControl --local-dir checkpoints --local-dir-use-symlinks False
```

## Usage
Here's a basic example of using EasyControl. For more details, please follow the instructions in our [__GitHub repository__](https://github.com/Xiaojiu-z/EasyControl):

### Model Initialization

```python
import torch
from PIL import Image
from src.pipeline import FluxPipeline
from src.transformer_flux import FluxTransformer2DModel
from src.lora_helper import set_single_lora, set_multi_lora

def clear_cache(transformer):
    # Reset the per-condition KV cache stored in each attention processor
    for name, attn_processor in transformer.attn_processors.items():
        attn_processor.bank_kv.clear()

# Initialize model
device = "cuda"
base_path = "FLUX.1-dev"  # Path to your base model
pipe = FluxPipeline.from_pretrained(base_path, torch_dtype=torch.bfloat16, device=device)
transformer = FluxTransformer2DModel.from_pretrained(
    base_path,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    device=device
)
pipe.transformer = transformer
pipe.to(device)

# Load control models
lora_path = "./models"
control_models = {
    "canny": f"{lora_path}/canny.safetensors",
    "depth": f"{lora_path}/depth.safetensors",
    "hedsketch": f"{lora_path}/hedsketch.safetensors",
    "pose": f"{lora_path}/pose.safetensors",
    "seg": f"{lora_path}/seg.safetensors",
    "inpainting": f"{lora_path}/inpainting.safetensors",
    "subject": f"{lora_path}/subject.safetensors",
}
```
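
If GPU memory is tight, standard diffusers pipelines support CPU offload; assuming the repository's `FluxPipeline` inherits this from diffusers (an assumption, not confirmed by this card), you could use it in place of `pipe.to(device)`:

```python
# Optional: trade speed for memory, assuming the custom FluxPipeline
# inherits diffusers' offload support
pipe.enable_model_cpu_offload()
```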

### Single Condition Control

```python
# Single spatial condition control example
path = control_models["canny"]
set_single_lora(pipe.transformer, path, lora_weights=[1], cond_size=512)

# Generate image
prompt = "A nice car on the beach"
spatial_image = "./test_imgs/canny.png"

image = pipe(
    prompt,
    height=720,
    width=992,
    guidance_scale=3.5,
    num_inference_steps=25,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(5),
    spatial_images=[spatial_image],
    cond_size=512,
).images[0]

# Clear cache after generation
clear_cache(pipe.transformer)
```
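
The same pattern works for any of the spatial conditions; for example, switching to depth control only changes the LoRA and the condition image (the depth-map path below is a hypothetical stand-in for your own input):

```python
# Switch to depth control: load the depth LoRA, then pass a depth map
set_single_lora(pipe.transformer, control_models["depth"], lora_weights=[1], cond_size=512)

image = pipe(
    "A nice car on the beach",
    height=720,
    width=992,
    guidance_scale=3.5,
    num_inference_steps=25,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(5),
    spatial_images=["./test_imgs/depth.png"],  # hypothetical depth-map input
    cond_size=512,
).images[0]

clear_cache(pipe.transformer)  # always clear between generations
```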

### Multi-Condition Control

```python
# Multi-condition control example: the subject LoRA must come before spatial LoRAs
paths = [control_models["subject"], control_models["inpainting"]]
set_multi_lora(pipe.transformer, paths, lora_weights=[[1], [1]], cond_size=512)

prompt = "A SKS on the car"
subject_images = ["./test_imgs/subject_1.png"]
spatial_images = ["./test_imgs/inpainting.png"]

image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=25,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(42),
    subject_images=subject_images,
    spatial_images=spatial_images,
    cond_size=512,
).images[0]

# Clear cache after generation
clear_cache(pipe.transformer)
```

## Usage Tips

- Clear the cache after each generation with `clear_cache(pipe.transformer)`.
- For optimal performance:
  - Start with `guidance_scale=3.5` and adjust based on results.
  - Use `num_inference_steps=25` for a good balance of quality and speed.
- When using the `set_multi_lora` API, make sure the subject LoRA path (subject) comes before the spatial LoRA paths (canny, depth, hedsketch, etc.), as shown in the sketch below.
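
For instance, combining the subject LoRA with canny control would be ordered like this (a sketch following the multi-condition example above):

```python
# Subject LoRA first, spatial LoRA second: required ordering for set_multi_lora
paths = [control_models["subject"], control_models["canny"]]
set_multi_lora(pipe.transformer, paths, lora_weights=[[1], [1]], cond_size=512)
```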

## Disclaimer
The code of EasyControl is released under the [Apache License](https://github.com/Xiaojiu-Z/EasyControl?tab=Apache-2.0-1-ov-file#readme) for both academic and commercial use. Our released checkpoints are for research purposes only. Users are free to create images with this tool, but they must comply with local laws and use it responsibly. The developers assume no responsibility for potential misuse by users.

## Citation
```bibtex
@misc{zhang2025easycontroladdingefficientflexible,
  title={EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer},
  author={Yuxuan Zhang and Yirui Yuan and Yiren Song and Haofan Wang and Jiaming Liu},
  year={2025},
  eprint={2503.07027},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.07027},
}
```