Commit fdf0378 (verified), committed by nielsr (HF staff) · Parent: 1fa3701

Add pipeline tag

This PR adds the metadata to the model.

Files changed (1): README.md (+167 −166)

README.md (after this change):
---
license: apache-2.0
pipeline_tag: text-to-image
---
# Implementation of EasyControl

EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer

<a href='https://arxiv.org/pdf/2503.07027'><img src='https://img.shields.io/badge/Technique-Report-red'></a>
<a href="https://github.com/Xiaojiu-z/EasyControl/tree/dev"><img src="https://img.shields.io/badge/GitHub-Code-blue.svg?logo=github&" alt="GitHub"></a>

> *[Yuxuan Zhang](https://xiaojiu-z.github.io/YuxuanZhang.github.io/), [Yirui Yuan](https://github.com/Reynoldyy), [Yiren Song](https://scholar.google.com.hk/citations?user=L2YS0jgAAAAJ), [Haofan Wang](https://haofanwang.github.io/), [Jiaming Liu](https://scholar.google.com/citations?user=SmL7oMQAAAAJ&hl=en)*
> <br>
> Tiamat AI, ShanghaiTech University, National University of Singapore, Liblib AI

<img src='assets/teaser.jpg'>

## Features
* **Motivation:** The architecture of diffusion models is transitioning from UNet-based to DiT (Diffusion Transformer). However, the DiT ecosystem lacks mature plugin support and faces challenges such as efficiency bottlenecks, conflicts in multi-condition coordination, and insufficient model adaptability; these issues are most pronounced in zero-shot multi-condition combination scenarios.
* **Contribution:** We propose EasyControl, an efficient and flexible unified conditional DiT framework. By incorporating a lightweight Condition Injection LoRA module, a Position-Aware Training Paradigm, and a combination of Causal Attention with KV Cache technology, we significantly enhance model compatibility, generation flexibility, and inference efficiency.
<img src='assets/method.jpg'>
## Download

You can download the models directly from [Hugging Face](https://huggingface.co/Xiaojiu-Z/EasyControl), or with a Python script:
```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/canny.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/depth.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/hedsketch.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/inpainting.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/pose.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/seg.safetensors", local_dir="./models")
hf_hub_download(repo_id="Xiaojiu-Z/EasyControl", filename="models/subject.safetensors", local_dir="./models")
```
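Alternatively, every adapter can be fetched in one call with `snapshot_download`; this is a minimal sketch, assuming the `models/` repository layout used by the per-file downloads above.

```python
from huggingface_hub import snapshot_download

# Fetch only the LoRA weights under models/ in a single call.
snapshot_download(
    repo_id="Xiaojiu-Z/EasyControl",
    local_dir="./models",
    allow_patterns=["models/*.safetensors"],
)
```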
If you cannot access Hugging Face, you can use [hf-mirror](https://hf-mirror.com/) to download the models:

```bash
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download Xiaojiu-Z/EasyControl --local-dir checkpoints --local-dir-use-symlinks False
```
## Usage
Here's a basic example of using EasyControl. For more details, please follow the instructions in our [__GitHub repository__](https://github.com/Xiaojiu-z/EasyControl):

### Model Initialization

```python
import torch
from PIL import Image
from src.pipeline import FluxPipeline
from src.transformer_flux import FluxTransformer2DModel
from src.lora_helper import set_single_lora, set_multi_lora

def clear_cache(transformer):
    # Drop the condition KV states cached by the attention processors
    # so they do not leak into the next generation.
    for attn_processor in transformer.attn_processors.values():
        attn_processor.bank_kv.clear()

# Initialize model
device = "cuda"
base_path = "FLUX.1-dev"  # Path to your base model
pipe = FluxPipeline.from_pretrained(base_path, torch_dtype=torch.bfloat16, device=device)
transformer = FluxTransformer2DModel.from_pretrained(
    base_path,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    device=device,
)
pipe.transformer = transformer
pipe.to(device)

# Load control models
lora_path = "./models"
control_models = {
    "canny": f"{lora_path}/canny.safetensors",
    "depth": f"{lora_path}/depth.safetensors",
    "hedsketch": f"{lora_path}/hedsketch.safetensors",
    "pose": f"{lora_path}/pose.safetensors",
    "seg": f"{lora_path}/seg.safetensors",
    "inpainting": f"{lora_path}/inpainting.safetensors",
    "subject": f"{lora_path}/subject.safetensors",
}
```
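If GPU memory is tight, the standard diffusers offloading hook may work here as well; a minimal sketch, assuming the bundled `FluxPipeline` keeps the usual `DiffusionPipeline` interface (use it in place of `pipe.to(device)`):

```python
# Stream submodules to the GPU on demand instead of keeping the whole
# pipeline resident; assumes diffusers' offloading support is inherited.
pipe.enable_model_cpu_offload()
```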
### Single Condition Control

```python
# Single spatial condition control example
path = control_models["canny"]
set_single_lora(pipe.transformer, path, lora_weights=[1], cond_size=512)

# Generate image
prompt = "A nice car on the beach"
spatial_image = "./test_imgs/canny.png"

image = pipe(
    prompt,
    height=720,
    width=992,
    guidance_scale=3.5,
    num_inference_steps=25,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(5),
    spatial_images=[spatial_image],
    cond_size=512,
).images[0]

# Clear cache after generation
clear_cache(pipe.transformer)
```
### Multi-Condition Control

```python
# Multi-condition control example
paths = [control_models["subject"], control_models["inpainting"]]
set_multi_lora(pipe.transformer, paths, lora_weights=[[1], [1]], cond_size=512)

prompt = "A SKS on the car"
subject_images = ["./test_imgs/subject_1.png"]
spatial_images = ["./test_imgs/inpainting.png"]

image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=25,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(42),
    subject_images=subject_images,
    spatial_images=spatial_images,
    cond_size=512,
).images[0]

# Clear cache after generation
clear_cache(pipe.transformer)
```
## Usage Tips

- Clear the cache after each generation with `clear_cache(pipe.transformer)`; see the sketch below.
- For optimal performance:
  - Start with `guidance_scale=3.5` and adjust based on results.
  - Use `num_inference_steps=25` for a good balance of quality and speed.
- When using the `set_multi_lora` API, make sure the subject LoRA path (`subject`) comes before the spatial LoRA paths (`canny`, `depth`, `hedsketch`, etc.).
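As a minimal sketch of the first tip, here is a batch loop that resets the KV cache between runs. It reuses `pipe`, `clear_cache`, and `spatial_image` from the single-condition example above; the prompts are only illustrative.

```python
prompts = ["A nice car on the beach", "A vintage car in the desert"]

for i, prompt in enumerate(prompts):
    image = pipe(
        prompt,
        height=720,
        width=992,
        guidance_scale=3.5,
        num_inference_steps=25,
        max_sequence_length=512,
        generator=torch.Generator("cpu").manual_seed(5),
        spatial_images=[spatial_image],
        cond_size=512,
    ).images[0]
    image.save(f"output_{i}.png")
    clear_cache(pipe.transformer)  # reset the condition KV cache before the next run
```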
## Disclaimer
The code of EasyControl is released under the [Apache License](https://github.com/Xiaojiu-Z/EasyControl?tab=Apache-2.0-1-ov-file#readme) for both academic and commercial use. Our released checkpoints are for research purposes only. Users are free to create images with this tool, but they must comply with local laws and use it responsibly. The developers assume no responsibility for potential misuse by users.

## Citation
```bibtex
@misc{zhang2025easycontroladdingefficientflexible,
      title={EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer},
      author={Yuxuan Zhang and Yirui Yuan and Yiren Song and Haofan Wang and Jiaming Liu},
      year={2025},
      eprint={2503.07027},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.07027},
}
```