Diffusers
TalHach61 committed on
Commit
ebe67e3
·
verified ·
1 Parent(s): 81a47e0

Update README.md

---
license: other
license_name: bria-legal-lobby
license_link: https://bria.ai/legal-lobby
---

# BRIA 3.0 ControlNet Union Model Card

BRIA-3.0 ControlNet-Union is trained on the foundation of [BRIA-3.0 Text-to-Image](https://huggingface.co/briaai/BRIA-3.0-TOUCAN).

[CLICK HERE FOR A DEMO](https://huggingface.co/spaces/briaai/BRIA-2.3-ControlNet-Pose)

[BRIA 3.0](https://huggingface.co/briaai/BRIA-3.0-TOUCAN) was trained from scratch exclusively on licensed data from our esteemed data partners. It is therefore safe for commercial use and provides full legal liability coverage for copyright and privacy infringement, as well as harmful content mitigation. That is, our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.

Join our [Discord community](https://discord.gg/Nxe9YW9zHS) for more information, tutorials, tools, and to connect with other users!

![controlnet_pose_showoff.png](https://huggingface.co/briaai/BRIA-2.3-ControlNet-Pose/resolve/main/controlnet_pose_showoff.png)

### Model Description
- **Developed by:** BRIA AI
- **Model type:** [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet) for latent diffusion
- **License:** [bria-3.0](https://bria.ai/bria-huggingface-model-license-agreement/)
- **Model Description:** ControlNet Union for the BRIA 3.0 Text-to-Image model. The model generates images guided by text and a conditioning image.
- **Resources for more information:** [BRIA AI](https://bria.ai/)

### Get Access
BRIA 3.0 ControlNet-Union requires access to BRIA 3.0 Text-to-Image. For more information, [click here](https://huggingface.co/briaai/BRIA-3.0-TOUCAN).

## Control Mode

| Control Mode | Description |
|:------------:|:-----------:|
| 0 | depth |
| 1 | canny |
| 2 | colorgrid |
| 3 | recolor |
| 4 | tile |
| 5 | pose |

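To avoid hard-coding magic numbers in your own scripts, the table above can be captured as a small lookup. This helper is purely illustrative and not part of the BRIA API; the mode names and indices come from the table:

```python
# Control-mode indices from the BRIA-3.0 ControlNet-Union table above.
CONTROL_MODES = {
    "depth": 0,
    "canny": 1,
    "colorgrid": 2,
    "recolor": 3,
    "tile": 4,
    "pose": 5,
}

def control_mode(name: str) -> int:
    """Return the integer control mode for a named condition type."""
    try:
        return CONTROL_MODES[name.lower()]
    except KeyError:
        raise ValueError(
            f"Unknown control mode {name!r}; expected one of {sorted(CONTROL_MODES)}"
        )

print(control_mode("canny"))  # 1
```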

# Inference

```bash
# hf_hub_download is provided by the huggingface_hub package
pip install diffusers==0.30.2 huggingface_hub
```

```python
from huggingface_hub import hf_hub_download
import os

try:
    local_dir = os.path.dirname(__file__)
except NameError:  # __file__ is undefined in notebooks and REPLs
    local_dir = '.'

hf_hub_download(repo_id="briaai/BRIA-3.0-TOUCAN", filename='pipeline_bria.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.0-TOUCAN", filename='transformer_bria.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.0-TOUCAN", filename='bria_utils.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.0-ControlNet-Union", filename='pipeline_bria_controlnet.py', local_dir=local_dir)
hf_hub_download(repo_id="briaai/BRIA-3.0-ControlNet-Union", filename='controlnet_bria.py', local_dir=local_dir)

import torch
from diffusers.utils import load_image
from controlnet_bria import BriaControlNetModel, BriaMultiControlNetModel
from pipeline_bria_controlnet import BriaControlNetPipeline

base_model = 'briaai/BRIA-3.0-TOUCAN'
controlnet_model = 'briaai/BRIA-3.0-ControlNet-Union'

controlnet = BriaControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
pipe = BriaControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

control_image = load_image("https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union-alpha/resolve/main/images/canny.jpg")
controlnet_conditioning_scale = 0.5
control_mode = 1  # canny (see the Control Mode table above)

width, height = control_image.size

prompt = 'A bohemian-style female travel blogger with sun-kissed skin and messy beach waves.'

image = pipe(
    prompt,
    control_image=control_image,
    control_mode=control_mode,
    width=width,
    height=height,
    controlnet_conditioning_scale=controlnet_conditioning_scale,
    num_inference_steps=24,
    guidance_scale=3.5,
).images[0]
image.save("image.jpg")
```
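If you want to build a canny-style condition image from your own photo instead of downloading one, here is a minimal sketch using only Pillow's `FIND_EDGES` filter. It is an illustrative stand-in, not part of the BRIA pipeline; a proper canny detector (e.g. `cv2.Canny` from `opencv-python`) generally gives cleaner conditions:

```python
from PIL import Image, ImageFilter

def make_edge_condition(image: Image.Image) -> Image.Image:
    """Build a rough 3-channel edge map from an RGB image.

    Uses PIL's FIND_EDGES kernel as a lightweight approximation of canny;
    the result is stacked to RGB because ControlNet expects 3 channels.
    """
    edges = image.convert("L").filter(ImageFilter.FIND_EDGES)
    return Image.merge("RGB", (edges, edges, edges))
```

Usage (with `load_image` from `diffusers.utils`): `control_image = make_edge_condition(load_image("photo.jpg"))`.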

# Multi-Controls Inference
```python
import torch
from diffusers.utils import load_image
from controlnet_bria import BriaControlNetModel, BriaMultiControlNetModel
from pipeline_bria_controlnet import BriaControlNetPipeline

base_model = 'briaai/BRIA-3.0-TOUCAN'
controlnet_model_union = 'briaai/BRIA-3.0-ControlNet-Union'

controlnet_union = BriaControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.bfloat16)
controlnet = BriaMultiControlNetModel([controlnet_union])  # we always recommend loading via BriaMultiControlNetModel

pipe = BriaControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = 'A bohemian-style female travel blogger with sun-kissed skin and messy beach waves.'
control_image_depth = load_image("https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/resolve/main/images/depth.jpg")
control_mode_depth = 0  # depth (see the Control Mode table above)

control_image_canny = load_image("https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union/resolve/main/images/canny.jpg")
control_mode_canny = 1  # canny

width, height = control_image_depth.size

image = pipe(
    prompt,
    control_image=[control_image_depth, control_image_canny],
    control_mode=[control_mode_depth, control_mode_canny],
    width=width,
    height=height,
    controlnet_conditioning_scale=[0.2, 0.4],
    num_inference_steps=24,
    guidance_scale=3.5,
    generator=torch.manual_seed(42),
).images[0]
image.save("image_multi.jpg")
```
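When stacking several conditions, the `control_image`, `control_mode`, and `controlnet_conditioning_scale` lists must stay aligned element by element. A small helper (illustrative only, not part of the pipeline API) makes that alignment explicit; the mode indices follow the card's table:

```python
def build_multi_control(conditions):
    """Split (image, mode, scale) triples into the three aligned lists
    that a multi-control pipeline call expects as keyword arguments."""
    if not conditions:
        raise ValueError("need at least one (image, mode, scale) triple")
    images, modes, scales = map(list, zip(*conditions))
    return {
        "control_image": images,
        "control_mode": modes,
        "controlnet_conditioning_scale": scales,
    }

# Example with placeholder images (0 = depth, 1 = canny per the table above):
kwargs = build_multi_control([("depth_img", 0, 0.2), ("canny_img", 1, 0.4)])
print(kwargs["control_mode"])  # [0, 1]
```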

# Resources
- [InstantX/FLUX.1-dev-Controlnet-Canny](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny)
- [InstantX/FLUX.1-dev-Controlnet-Union](https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Union)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Depth](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth)
- [Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro](https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro)

# Acknowledgements
Thanks to [zzzzzero](https://github.com/zzzzzero) for helping us by pointing out some bugs in the training.