summerA2024 committed · Commit 647d7c6 · verified · 1 parent: 70d69fb

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,339 @@
1
+ # Stable Diffusion text-to-image fine-tuning
2
+
3
+ The `train_text_to_image.py` script shows how to fine-tune a Stable Diffusion model on your own dataset.
4
+
5
+ ___Note___:
6
+
7
+ ___This script is experimental. It fine-tunes the whole model, and the model often overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best results on your dataset.___
8
+
9
+
10
+ ## Running locally with PyTorch
11
+ ### Installing the dependencies
12
+
13
+ Before running the scripts, make sure to install the library's training dependencies:
14
+
15
+ **Important**
16
+
17
+ To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
18
+ ```bash
19
+ git clone https://github.com/huggingface/diffusers
20
+ cd diffusers
21
+ pip install .
22
+ ```
23
+
24
+ Then cd into the example folder and run
25
+ ```bash
26
+ pip install -r requirements.txt
27
+ ```
28
+
29
+ And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
30
+
31
+ ```bash
32
+ accelerate config
33
+ ```
34
+
35
+ Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.
36
+
37
+ ### Naruto example
38
+
39
+ You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-4`, so you'll need to visit [its card](https://huggingface.co/CompVis/stable-diffusion-v1-4), read the license and tick the checkbox if you agree.
40
+
41
+ You have to be a registered user in 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).
42
+
43
+ Run the following command to authenticate with your token:
44
+
45
+ ```bash
46
+ huggingface-cli login
47
+ ```
48
+
49
+ If you have already cloned the repo, then you won't need to go through these steps.
50
+
51
+ <br>
52
+
53
+ #### Hardware
54
+ With `gradient_checkpointing` and `mixed_precision`, it should be possible to fine-tune the model on a single 24GB GPU. For a higher `batch_size` and faster training, it's better to use GPUs with more than 30GB of memory.
55
+
56
+ **___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
57
+ <!-- accelerate_snippet_start -->
58
+ ```bash
59
+ export MODEL_NAME="CompVis/stable-diffusion-v1-4"
60
+ export DATASET_NAME="lambdalabs/naruto-blip-captions"
61
+
62
+ accelerate launch --mixed_precision="fp16" train_text_to_image.py \
63
+ --pretrained_model_name_or_path=$MODEL_NAME \
64
+ --dataset_name=$DATASET_NAME \
65
+ --use_ema \
66
+ --resolution=512 --center_crop --random_flip \
67
+ --train_batch_size=1 \
68
+ --gradient_accumulation_steps=4 \
69
+ --gradient_checkpointing \
70
+ --max_train_steps=15000 \
71
+ --learning_rate=1e-05 \
72
+ --max_grad_norm=1 \
73
+ --lr_scheduler="constant" --lr_warmup_steps=0 \
74
+ --output_dir="sd-naruto-model"
75
+ ```
76
+ <!-- accelerate_snippet_end -->
77
+
78
+
79
+ To run on your own training files, prepare the dataset according to the format required by `datasets`. You can find the instructions for how to do that in this [document](https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder-with-metadata).
80
+ If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
81
+
82
+ ```bash
83
+ export MODEL_NAME="CompVis/stable-diffusion-v1-4"
84
+ export TRAIN_DIR="path_to_your_dataset"
85
+
86
+ accelerate launch --mixed_precision="fp16" train_text_to_image.py \
87
+ --pretrained_model_name_or_path=$MODEL_NAME \
88
+ --train_data_dir=$TRAIN_DIR \
89
+ --use_ema \
90
+ --resolution=512 --center_crop --random_flip \
91
+ --train_batch_size=1 \
92
+ --gradient_accumulation_steps=4 \
93
+ --gradient_checkpointing \
94
+ --max_train_steps=15000 \
95
+ --learning_rate=1e-05 \
96
+ --max_grad_norm=1 \
97
+ --lr_scheduler="constant" --lr_warmup_steps=0 \
98
+ --output_dir="sd-naruto-model"
99
+ ```
100
+
101
+
102
+ Once the training is finished, the model will be saved in the `output_dir` specified in the command. In this example it's `sd-naruto-model`. To load the fine-tuned model for inference, just pass that path to `StableDiffusionPipeline`:
103
+
104
+ ```python
105
+ import torch
106
+ from diffusers import StableDiffusionPipeline
107
+
108
+ model_path = "path_to_saved_model"
109
+ pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
110
+ pipe.to("cuda")
111
+
112
+ image = pipe(prompt="yoda").images[0]
113
+ image.save("yoda-naruto.png")
114
+ ```
115
+
116
+ Checkpoints only save the UNet, so to run inference from a checkpoint, just load the UNet:
117
+
118
+ ```python
119
+ import torch
120
+ from diffusers import StableDiffusionPipeline, UNet2DConditionModel
121
+
122
+ model_path = "path_to_saved_model"
123
+ unet = UNet2DConditionModel.from_pretrained(model_path + "/checkpoint-<N>/unet", torch_dtype=torch.float16)
124
+
125
+ pipe = StableDiffusionPipeline.from_pretrained("<initial model>", unet=unet, torch_dtype=torch.float16)
126
+ pipe.to("cuda")
127
+
128
+ image = pipe(prompt="yoda").images[0]
129
+ image.save("yoda-naruto.png")
130
+ ```
131
+
132
+ #### Training with multiple GPUs
133
+
134
+ `accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch)
135
+ for running distributed training with `accelerate`. Here is an example command:
136
+
137
+ ```bash
138
+ export MODEL_NAME="CompVis/stable-diffusion-v1-4"
139
+ export DATASET_NAME="lambdalabs/naruto-blip-captions"
140
+
141
+ accelerate launch --mixed_precision="fp16" --multi_gpu train_text_to_image.py \
142
+ --pretrained_model_name_or_path=$MODEL_NAME \
143
+ --dataset_name=$DATASET_NAME \
144
+ --use_ema \
145
+ --resolution=512 --center_crop --random_flip \
146
+ --train_batch_size=1 \
147
+ --gradient_accumulation_steps=4 \
148
+ --gradient_checkpointing \
149
+ --max_train_steps=15000 \
150
+ --learning_rate=1e-05 \
151
+ --max_grad_norm=1 \
152
+ --lr_scheduler="constant" --lr_warmup_steps=0 \
153
+ --output_dir="sd-naruto-model"
154
+ ```
155
+
156
+
157
+ #### Training with Min-SNR weighting
158
+
159
+ We support training with the Min-SNR weighting strategy proposed in [Efficient Diffusion Training via Min-SNR Weighting Strategy](https://arxiv.org/abs/2303.09556) which helps to achieve faster convergence
160
+ by rebalancing the loss. In order to use it, one needs to set the `--snr_gamma` argument. The recommended
161
+ value when using it is 5.0.
162
+
163
+ You can find [this project on Weights and Biases](https://wandb.ai/sayakpaul/text2image-finetune-minsnr) that compares the loss surfaces of the following setups:
164
+
165
+ * Training without the Min-SNR weighting strategy
166
+ * Training with the Min-SNR weighting strategy (`snr_gamma` set to 5.0)
167
+ * Training with the Min-SNR weighting strategy (`snr_gamma` set to 1.0)
168
+
169
+ For our small Narutos dataset, the effects of the Min-SNR weighting strategy might not appear pronounced, but we believe they will be more noticeable for larger datasets.
170
+
171
+ Also, note that in this example, we either predict `epsilon` (i.e., the noise) or the `v_prediction`. For both of these cases, the formulation of the Min-SNR weighting strategy that we have used holds.
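+
+ To make the weighting concrete, here is a minimal sketch (not the exact code in the training script) of how Min-SNR loss weights can be derived from a scheduler's `alphas_cumprod`; the helper name and usage comments below are illustrative assumptions.
+
+ ```python
+ import torch
+
+
+ def min_snr_weights(alphas_cumprod, timesteps, snr_gamma=5.0, prediction_type="epsilon"):
+     # SNR(t) = alpha_bar_t / (1 - alpha_bar_t) for the sampled timesteps
+     alpha_bar = alphas_cumprod[timesteps]
+     snr = alpha_bar / (1.0 - alpha_bar)
+     # Clamp the per-timestep weight at snr_gamma, as proposed in the Min-SNR paper
+     clipped = torch.minimum(snr, torch.full_like(snr, snr_gamma))
+     if prediction_type == "v_prediction":
+         # For v-prediction the effective weight uses SNR + 1 in the denominator
+         return clipped / (snr + 1.0)
+     return clipped / snr
+
+
+ # Sketch of usage inside a training step:
+ # loss = torch.nn.functional.mse_loss(model_pred, target, reduction="none").mean(dim=[1, 2, 3])
+ # loss = (loss * min_snr_weights(noise_scheduler.alphas_cumprod, timesteps, snr_gamma=5.0)).mean()
+ ```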
172
+
173
+
174
+ #### Training with EMA weights
175
+
176
+ Through the `EMAModel` class, we support a convenient method of tracking an exponential moving average of model parameters. This helps to smooth out noise in model parameter updates and generally improves model performance. If enabled with the `--use_ema` argument, the final model checkpoint that is saved at the end of training will use the EMA weights.
177
+
178
+ EMA weights require an additional full-precision copy of the model parameters to be stored in memory, but otherwise have very little performance overhead. `--foreach_ema` can be used to further reduce the overhead. If you are short on VRAM and still want to use EMA weights, you can store them in CPU RAM by using the `--offload_ema` argument. This will keep the EMA weights in pinned CPU memory during the training step. Then, once every model parameter update, it will transfer the EMA weights back to the GPU which can then update the parameters on the GPU, before sending them back to the CPU. Both of these transfers are set up as non-blocking, so CUDA devices should be able to overlap this transfer with other computations. With sufficient bandwidth between the host and device and a sufficiently long gap between model parameter updates, storing EMA weights in CPU RAM should have no additional performance overhead, as long as no other calls force synchronization.
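+
+ For intuition, here is a minimal sketch of the plain (non-offloaded) EMA update itself; the training script relies on diffusers' `EMAModel`, so the `decay` value and the bookkeeping below are illustrative only.
+
+ ```python
+ import torch
+
+
+ @torch.no_grad()
+ def ema_update(ema_params, model_params, decay=0.9999):
+     # Blend each EMA parameter toward the current model parameter
+     for ema_p, p in zip(ema_params, model_params):
+         ema_p.mul_(decay).add_(p.detach().float(), alpha=1.0 - decay)
+
+
+ # Sketch of usage in a training loop:
+ # ema_params = [p.detach().clone().float() for p in unet.parameters()]
+ # ... after every optimizer.step():
+ # ema_update(ema_params, unet.parameters())
+ # At the end of training, the EMA parameters are copied back into the model before saving.
+ ```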
179
+
180
+ #### Training with DREAM
181
+
182
+ We support training epsilon (noise) prediction models using the [DREAM (Diffusion Rectification and Estimation-Adaptive Models) strategy](https://arxiv.org/abs/2312.00210). DREAM claims to increase model fidelity at the performance cost of one extra gradient-free UNet `forward` pass per training step. You can turn on DREAM training by using the `--dream_training` argument. The `--dream_detail_preservation` argument controls the detail preservation variable *p*, which defaults to 1 as in the paper.
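+
+ As a rough, hedged sketch of the idea (not necessarily identical to the script's implementation for every prediction type): the extra gradient-free forward pass estimates the noise, and the estimation error is used to rectify both the noisy input and the regression target before the loss is computed. The function and argument names below are illustrative.
+
+ ```python
+ import torch
+
+
+ @torch.no_grad()
+ def dream_rectify(unet, noisy_latents, timesteps, encoder_hidden_states,
+                   noise, sqrt_one_minus_alpha_bar, dream_detail_preservation=1.0):
+     # Gradient-free estimate of the noise with the current model (epsilon prediction)
+     pred_noise = unet(noisy_latents, timesteps, encoder_hidden_states).sample
+     # Per-timestep scaling of the estimation error (sketch of the paper's lambda term)
+     dream_lambda = sqrt_one_minus_alpha_bar ** dream_detail_preservation
+     delta = dream_lambda * (noise - pred_noise)
+     # Rectify both the model input and the target with the same correction
+     return noisy_latents + sqrt_one_minus_alpha_bar * delta, noise + delta
+ ```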
183
+
184
+
185
+
186
+ ## Training with LoRA
187
+
188
+ Low-Rank Adaptation of Large Language Models was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*.
189
+
190
+ In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights. This has a couple of advantages:
191
+
192
+ - Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114).
193
+ - Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
194
+ - LoRA attention layers allow you to control the extent to which the model is adapted toward new training images via a `scale` parameter (see the sketch right after this list).
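+
+ To make this concrete, here is a minimal sketch of a LoRA-augmented linear layer; the actual training script uses the PEFT library rather than a hand-rolled module, and the `rank` and `scale` defaults below are illustrative.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+
+ class LoRALinear(nn.Module):
+     def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
+         super().__init__()
+         self.base = base                    # frozen pretrained projection
+         self.base.requires_grad_(False)
+         self.lora_down = nn.Linear(base.in_features, rank, bias=False)  # A
+         self.lora_up = nn.Linear(rank, base.out_features, bias=False)   # B
+         nn.init.zeros_(self.lora_up.weight)  # start as a no-op update
+         self.scale = scale
+
+     def forward(self, x):
+         # Output = frozen base projection + scaled low-rank update
+         return self.base(x) + self.scale * self.lora_up(self.lora_down(x))
+ ```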
195
+
196
+ [cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository.
197
+
198
+ With LoRA, it's possible to fine-tune Stable Diffusion on a custom image-caption pair dataset
199
+ on consumer GPUs like Tesla T4, Tesla V100.
200
+
201
+ ### Training
202
+
203
+ First, you need to set up your development environment as is explained in the [installation section](#installing-the-dependencies). Make sure to set the `MODEL_NAME` and `DATASET_NAME` environment variables. Here, we will use [Stable Diffusion v1-4](https://hf.co/CompVis/stable-diffusion-v1-4) and the [Narutos dataset](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions).
204
+
205
+ **___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___**
206
+
207
+ **___Note: It is quite useful to monitor the training progress by regularly generating sample images during training. [Weights and Biases](https://docs.wandb.ai/quickstart) is a nice solution to easily see the generated images during training. All you need to do is run `pip install wandb` before training to automatically log images.___**
208
+
209
+ ```bash
210
+ export MODEL_NAME="CompVis/stable-diffusion-v1-4"
211
+ export DATASET_NAME="lambdalabs/naruto-blip-captions"
212
+ ```
213
+
214
+ For this example we want to directly store the trained LoRA embeddings on the Hub, so
215
+ we need to be logged in and add the `--push_to_hub` flag.
216
+
217
+ ```bash
218
+ huggingface-cli login
219
+ ```
220
+
221
+ Now we can start training!
222
+
223
+ ```bash
224
+ accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
225
+ --pretrained_model_name_or_path=$MODEL_NAME \
226
+ --dataset_name=$DATASET_NAME --caption_column="text" \
227
+ --resolution=512 --random_flip \
228
+ --train_batch_size=1 \
229
+ --num_train_epochs=100 --checkpointing_steps=5000 \
230
+ --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
231
+ --seed=42 \
232
+ --output_dir="sd-naruto-model-lora" \
233
+ --validation_prompt="cute dragon creature" --report_to="wandb"
234
+ ```
235
+
236
+ The above command will also run inference as fine-tuning progresses and log the results to Weights and Biases.
237
+
238
+ **___Note: When using LoRA we can use a much higher learning rate compared to non-LoRA fine-tuning. Here we use *1e-4* instead of the usual *1e-5*. Also, by using LoRA, it's possible to run `train_text_to_image_lora.py` on consumer GPUs like the T4 or V100.___**
239
+
240
+ The final LoRA embedding weights have been uploaded to [sayakpaul/sd-model-finetuned-lora-t4](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4). **___Note: [The final weights](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4/blob/main/pytorch_lora_weights.bin) are only 3 MB in size, which is orders of magnitude smaller than the original model.___**
241
+
242
+ You can check some inference samples that were logged during the course of the fine-tuning process [here](https://wandb.ai/sayakpaul/text2image-fine-tune/runs/q4lc0xsw).
243
+
244
+ ### Inference
245
+
246
+ Once you have trained a model using the above command, inference can be done simply with the `StableDiffusionPipeline` after loading the trained LoRA weights. You
247
+ need to pass the `output_dir` for loading the LoRA weights which, in this case, is `sd-naruto-model-lora`.
248
+
249
+ ```python
250
+ from diffusers import StableDiffusionPipeline
251
+ import torch
252
+
253
+ model_path = "sayakpaul/sd-model-finetuned-lora-t4"
254
+ pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
255
+ pipe.unet.load_attn_procs(model_path)
256
+ pipe.to("cuda")
257
+
258
+ prompt = "A naruto with green eyes and red legs."
259
+ image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
260
+ image.save("naruto.png")
261
+ ```
262
+
263
+ If you are loading the LoRA parameters from the Hub and if the Hub repository has
264
+ a `base_model` tag (such as [this](https://huggingface.co/sayakpaul/sd-model-finetuned-lora-t4/blob/main/README.md?code=true#L4)), then
265
+ you can do:
266
+
267
+ ```py
268
+ from huggingface_hub.repocard import RepoCard
269
+
270
+ lora_model_id = "sayakpaul/sd-model-finetuned-lora-t4"
271
+ card = RepoCard.load(lora_model_id)
272
+ base_model_id = card.data.to_dict()["base_model"]
273
+
274
+ pipe = StableDiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)
275
+ ...
276
+ ```
277
+
278
+ ## Training with Flax/JAX
279
+
280
+ For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script.
281
+
282
+ **___Note: The Flax example doesn't yet support features like gradient checkpointing and gradient accumulation, so to use Flax for faster training you will need GPUs with more than 30GB of memory or a TPU v3.___**
283
+
284
+
285
+ Before running the scripts, make sure to install the library's training dependencies:
286
+
287
+ ```bash
288
+ pip install -U -r requirements_flax.txt
289
+ ```
290
+
291
+ ```bash
292
+ export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
293
+ export DATASET_NAME="lambdalabs/naruto-blip-captions"
294
+
295
+ python train_text_to_image_flax.py \
296
+ --pretrained_model_name_or_path=$MODEL_NAME \
297
+ --dataset_name=$DATASET_NAME \
298
+ --resolution=512 --center_crop --random_flip \
299
+ --train_batch_size=1 \
300
+ --mixed_precision="fp16" \
301
+ --max_train_steps=15000 \
302
+ --learning_rate=1e-05 \
303
+ --max_grad_norm=1 \
304
+ --output_dir="sd-naruto-model"
305
+ ```
306
+
307
+ To run on your own training files, prepare the dataset according to the format required by `datasets`. You can find the instructions for how to do that in this [document](https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder-with-metadata).
308
+ If you wish to use custom loading logic, you should modify the script; we have left pointers for that in the training script.
309
+
310
+ ```bash
311
+ export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
312
+ export TRAIN_DIR="path_to_your_dataset"
313
+
314
+ python train_text_to_image_flax.py \
315
+ --pretrained_model_name_or_path=$MODEL_NAME \
316
+ --train_data_dir=$TRAIN_DIR \
317
+ --resolution=512 --center_crop --random_flip \
318
+ --train_batch_size=1 \
319
+ --mixed_precision="fp16" \
320
+ --max_train_steps=15000 \
321
+ --learning_rate=1e-05 \
322
+ --max_grad_norm=1 \
323
+ --output_dir="sd-naruto-model"
324
+ ```
325
+
326
+ ### Training with xFormers
327
+
328
+ You can enable memory efficient attention by [installing xFormers](https://huggingface.co/docs/diffusers/main/en/optimization/xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script.
329
+
330
+ xFormers training is not available for Flax/JAX.
331
+
332
+ **Note**:
333
+
334
+ According to [this issue](https://github.com/huggingface/diffusers/issues/2234#issuecomment-1416931212), xFormers `v0.0.16` cannot be used for training in some GPUs. If you observe that problem, please install a development version as indicated in that comment.
335
+
336
+ ## Stable Diffusion XL
337
+
338
+ * We support fine-tuning the UNet shipped in [Stable Diffusion XL](https://huggingface.co/papers/2307.01952) via the `train_text_to_image_sdxl.py` script. Please refer to the docs [here](./README_sdxl.md).
339
+ * We also support fine-tuning of the UNet and Text Encoder shipped in [Stable Diffusion XL](https://huggingface.co/papers/2307.01952) with LoRA via the `train_text_to_image_lora_sdxl.py` script. Please refer to the docs [here](./README_sdxl.md).
README_sdxl.md ADDED
@@ -0,0 +1,285 @@
1
+ # Stable Diffusion XL text-to-image fine-tuning
2
+
3
+ The `train_text_to_image_sdxl.py` script shows how to fine-tune Stable Diffusion XL (SDXL) on your own dataset.
4
+
5
+ 🚨 This script is experimental. It fine-tunes the whole model, and the model often overfits and runs into issues like catastrophic forgetting. It's recommended to try different hyperparameters to get the best results on your dataset. 🚨
6
+
7
+ ## Running locally with PyTorch
8
+
9
+ ### Installing the dependencies
10
+
11
+ Before running the scripts, make sure to install the library's training dependencies:
12
+
13
+ **Important**
14
+
15
+ To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
16
+
17
+ ```bash
18
+ git clone https://github.com/huggingface/diffusers
19
+ cd diffusers
20
+ pip install -e .
21
+ ```
22
+
23
+ Then cd into the `examples/text_to_image` folder and run
24
+ ```bash
25
+ pip install -r requirements_sdxl.txt
26
+ ```
27
+
28
+ And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:
29
+
30
+ ```bash
31
+ accelerate config
32
+ ```
33
+
34
+ Or for a default accelerate configuration without answering questions about your environment
35
+
36
+ ```bash
37
+ accelerate config default
38
+ ```
39
+
40
+ Or if your environment doesn't support an interactive shell (e.g., a notebook)
41
+
42
+ ```python
43
+ from accelerate.utils import write_basic_config
44
+ write_basic_config()
45
+ ```
46
+
47
+ When running `accelerate config`, specifying torch compile mode as True can give dramatic speedups.
48
+ Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.6.0` installed in your environment.
49
+
50
+ ### Training
51
+
52
+ ```bash
53
+ export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
54
+ export VAE_NAME="madebyollin/sdxl-vae-fp16-fix"
55
+ export DATASET_NAME="lambdalabs/naruto-blip-captions"
56
+
57
+ accelerate launch train_text_to_image_sdxl.py \
58
+ --pretrained_model_name_or_path=$MODEL_NAME \
59
+ --pretrained_vae_model_name_or_path=$VAE_NAME \
60
+ --dataset_name=$DATASET_NAME \
61
+ --enable_xformers_memory_efficient_attention \
62
+ --resolution=512 --center_crop --random_flip \
63
+ --proportion_empty_prompts=0.2 \
64
+ --train_batch_size=1 \
65
+ --gradient_accumulation_steps=4 --gradient_checkpointing \
66
+ --max_train_steps=10000 \
67
+ --use_8bit_adam \
68
+ --learning_rate=1e-06 --lr_scheduler="constant" --lr_warmup_steps=0 \
69
+ --mixed_precision="fp16" \
70
+ --report_to="wandb" \
71
+ --validation_prompt="a cute Sundar Pichai creature" --validation_epochs 5 \
72
+ --checkpointing_steps=5000 \
73
+ --output_dir="sdxl-naruto-model" \
74
+ --push_to_hub
75
+ ```
76
+
77
+ **Notes**:
78
+
79
+ * The `train_text_to_image_sdxl.py` script pre-computes text embeddings and the VAE encodings and keeps them in memory. While this might not be a problem for smaller datasets like [`lambdalabs/naruto-blip-captions`](https://hf.co/datasets/lambdalabs/naruto-blip-captions), it can definitely lead to memory problems when the script is used on a larger dataset. For those cases, you would want to serialize these pre-computed representations to disk separately and load them during the fine-tuning process (see the sketch after these notes). Refer to [this PR](https://github.com/huggingface/diffusers/pull/4505) for a more in-depth discussion.
80
+ * The training script is compute-intensive and may not run on a consumer GPU like Tesla T4.
81
+ * The training command shown above performs intermediate quality validation in between the training epochs and logs the results to Weights and Biases. `--report_to`, `--validation_prompt`, and `--validation_epochs` are the relevant CLI arguments here.
82
+ * SDXL's VAE is known to suffer from numerical instability issues. This is why we also expose a CLI argument namely `--pretrained_vae_model_name_or_path` that lets you specify the location of a better VAE (such as [this one](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)).
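+
+ Here is a minimal sketch of the serialize-to-disk idea mentioned in the first note above; the `encode_caption` helper, the file layout, and `cache_dir` are hypothetical placeholders rather than part of the script.
+
+ ```python
+ import os
+
+ import torch
+
+
+ @torch.no_grad()
+ def cache_example(idx, image_tensor, caption, vae, encode_caption, cache_dir="precomputed"):
+     # encode_caption is a hypothetical helper wrapping both SDXL text encoders
+     os.makedirs(cache_dir, exist_ok=True)
+     latents = vae.encode(image_tensor.unsqueeze(0)).latent_dist.sample() * vae.config.scaling_factor
+     prompt_embeds, pooled_embeds = encode_caption(caption)
+     torch.save(
+         {"latents": latents.cpu(), "prompt_embeds": prompt_embeds.cpu(), "pooled_embeds": pooled_embeds.cpu()},
+         os.path.join(cache_dir, f"{idx}.pt"),
+     )
+
+
+ # During fine-tuning, the dataset can then load `{idx}.pt` for each sample instead of
+ # re-running the VAE and the text encoders on the fly.
+ ```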
83
+
84
+ ### Inference
85
+
86
+ ```python
87
+ from diffusers import DiffusionPipeline
88
+ import torch
89
+
90
+ model_path = "your-model-id-goes-here" # <-- change this
91
+ pipe = DiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
92
+ pipe.to("cuda")
93
+
94
+ prompt = "A naruto with green eyes and red legs."
95
+ image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
96
+ image.save("naruto.png")
97
+ ```
98
+
99
+ ### Inference in Pytorch XLA
100
+ ```python
101
+ from diffusers import DiffusionPipeline
102
+ import torch
103
+ import torch_xla.core.xla_model as xm
+ from time import time  # needed for the timing calls below
104
+
105
+ model_id = "stabilityai/stable-diffusion-xl-base-1.0"
106
+ pipe = DiffusionPipeline.from_pretrained(model_id)
107
+
108
+ device = xm.xla_device()
109
+ pipe.to(device)
110
+
111
+ prompt = "A naruto with green eyes and red legs."
+ inference_steps = 30  # number of denoising steps; 30 matches the other examples in this README
112
+ start = time()
113
+ image = pipe(prompt, num_inference_steps=inference_steps).images[0]
114
+ print(f'Compilation time is {time()-start} sec')
115
+ image.save("naruto.png")
116
+
117
+ start = time()
118
+ image = pipe(prompt, num_inference_steps=inference_steps).images[0]
119
+ print(f'Inference time is {time()-start} sec after compilation')
120
+ ```
121
+
122
+ Note: There is a warmup step in PyTorch XLA. This takes longer because of
123
+ compilation and optimization. To see the real benefits of Pytorch XLA and
124
+ speedup, we need to call the pipe again on the input with the same length
125
+ as the original prompt to reuse the optimized graph and get the performance
126
+ boost.
127
+
128
+ ## LoRA training example for Stable Diffusion XL (SDXL)
129
+
130
+ Low-Rank Adaptation of Large Language Models was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen*.
131
+
132
+ In a nutshell, LoRA allows adapting pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights. This has a couple of advantages:
133
+
134
+ - Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114).
135
+ - Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable.
136
+ - LoRA attention layers allow you to control the extent to which the model is adapted toward new training images via a `scale` parameter.
137
+
138
+ [cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository.
139
+
140
+ With LoRA, it's possible to fine-tune Stable Diffusion on a custom image-caption pair dataset
141
+ on consumer GPUs like Tesla T4, Tesla V100.
142
+
143
+ ### Training
144
+
145
+ First, you need to set up your development environment as is explained in the [installation section](#installing-the-dependencies). Make sure to set the `MODEL_NAME` and `DATASET_NAME` environment variables and, optionally, the `VAE_NAME` variable. Here, we will use [Stable Diffusion XL 1.0-base](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and the [Narutos dataset](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions).
146
+
147
+ **___Note: It is quite useful to monitor the training progress by regularly generating sample images during training. [Weights and Biases](https://docs.wandb.ai/quickstart) is a nice solution to easily see the generated images during training. All you need to do is run `pip install wandb` before training to automatically log images.___**
148
+
149
+ ```bash
150
+ export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
151
+ export VAE_NAME="madebyollin/sdxl-vae-fp16-fix"
152
+ export DATASET_NAME="lambdalabs/naruto-blip-captions"
153
+ ```
154
+
155
+ For this example we want to directly store the trained LoRA embeddings on the Hub, so
156
+ we need to be logged in and add the `--push_to_hub` flag.
157
+
158
+ ```bash
159
+ huggingface-cli login
160
+ ```
161
+
162
+ Now we can start training!
163
+
164
+ ```bash
165
+ accelerate launch train_text_to_image_lora_sdxl.py \
166
+ --pretrained_model_name_or_path=$MODEL_NAME \
167
+ --pretrained_vae_model_name_or_path=$VAE_NAME \
168
+ --dataset_name=$DATASET_NAME --caption_column="text" \
169
+ --resolution=1024 --random_flip \
170
+ --train_batch_size=1 \
171
+ --num_train_epochs=2 --checkpointing_steps=500 \
172
+ --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
173
+ --mixed_precision="fp16" \
174
+ --seed=42 \
175
+ --output_dir="sd-naruto-model-lora-sdxl" \
176
+ --validation_prompt="cute dragon creature" --report_to="wandb" \
177
+ --push_to_hub
178
+ ```
179
+
180
+ The above command will also run inference as fine-tuning progresses and log the results to Weights and Biases.
181
+
182
+ **Notes**:
183
+
184
+ * SDXL's VAE is known to suffer from numerical instability issues. This is why we also expose a CLI argument namely `--pretrained_vae_model_name_or_path` that lets you specify the location of a better VAE (such as [this one](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix)).
185
+
186
+
187
+ ### Using DeepSpeed
188
+ Using DeepSpeed, one can reduce GPU memory consumption, enabling the training of models on GPUs with less memory. DeepSpeed can offload model parameters to the machine's memory, or it can distribute parameters, gradients, and optimizer states across multiple GPUs. This allows larger models to be trained under the same hardware configuration.
189
+
190
+ First, you need to use the `accelerate config` command to choose DeepSpeed, or manually set up DeepSpeed in the accelerate config file.
191
+
192
+ Here is an example of a config file for using DeepSpeed. For more detailed explanations of the configuration, you can refer to this [link](https://huggingface.co/docs/accelerate/usage_guides/deepspeed).
193
+ ```yaml
194
+ compute_environment: LOCAL_MACHINE
195
+ debug: true
196
+ deepspeed_config:
197
+ gradient_accumulation_steps: 1
198
+ gradient_clipping: 1.0
199
+ offload_optimizer_device: none
200
+ offload_param_device: none
201
+ zero3_init_flag: false
202
+ zero_stage: 2
203
+ distributed_type: DEEPSPEED
204
+ downcast_bf16: 'no'
205
+ machine_rank: 0
206
+ main_training_function: main
207
+ mixed_precision: fp16
208
+ num_machines: 1
209
+ num_processes: 1
210
+ rdzv_backend: static
211
+ same_network: true
212
+ tpu_env: []
213
+ tpu_use_cluster: false
214
+ tpu_use_sudo: false
215
+ use_cpu: false
216
+ ```
217
+ You need to save the above configuration as an `accelerate_config.yaml` file and then pass its path via the `ACCELERATE_CONFIG_FILE` variable. This way you can use DeepSpeed to train your SDXL model with LoRA. You can also use DeepSpeed to train other Stable Diffusion models in the same way.
218
+
219
+ ```shell
220
+ export MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"
221
+ export VAE_NAME="madebyollin/sdxl-vae-fp16-fix"
222
+ export DATASET_NAME="lambdalabs/naruto-blip-captions"
223
+ export ACCELERATE_CONFIG_FILE="your accelerate_config.yaml"
224
+
225
+ accelerate launch --config_file $ACCELERATE_CONFIG_FILE train_text_to_image_lora_sdxl.py \
226
+ --pretrained_model_name_or_path=$MODEL_NAME \
227
+ --pretrained_vae_model_name_or_path=$VAE_NAME \
228
+ --dataset_name=$DATASET_NAME --caption_column="text" \
229
+ --resolution=1024 \
230
+ --train_batch_size=1 \
231
+ --num_train_epochs=2 \
232
+ --checkpointing_steps=2 \
233
+ --learning_rate=1e-04 \
234
+ --lr_scheduler="constant" \
235
+ --lr_warmup_steps=0 \
236
+ --mixed_precision="fp16" \
237
+ --max_train_steps=20 \
238
+ --validation_epochs=20 \
239
+ --seed=1234 \
240
+ --output_dir="sd-naruto-model-lora-sdxl" \
241
+ --validation_prompt="cute dragon creature"
242
+ ```
243
+
244
+
245
+ ### Finetuning the text encoder and UNet
246
+
247
+ The script also allows you to finetune the `text_encoder` along with the `unet`.
248
+
249
+ 🚨 Training the text encoder requires additional memory.
250
+
251
+ Pass the `--train_text_encoder` argument to the training script to enable finetuning the `text_encoder` and `unet`:
252
+
253
+ ```bash
254
+ accelerate launch train_text_to_image_lora_sdxl.py \
255
+ --pretrained_model_name_or_path=$MODEL_NAME \
256
+ --dataset_name=$DATASET_NAME --caption_column="text" \
257
+ --resolution=1024 --random_flip \
258
+ --train_batch_size=1 \
259
+ --num_train_epochs=2 --checkpointing_steps=500 \
260
+ --learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
261
+ --seed=42 \
262
+ --output_dir="sd-naruto-model-lora-sdxl-txt" \
263
+ --train_text_encoder \
264
+ --validation_prompt="cute dragon creature" --report_to="wandb" \
265
+ --push_to_hub
266
+ ```
267
+
268
+ ### Inference
269
+
270
+ Once you have trained a model using the above command, inference can be done simply with the `DiffusionPipeline` after loading the trained LoRA weights. You
271
+ need to pass the `output_dir` for loading the LoRA weights which, in this case, is `sd-naruto-model-lora-sdxl`.
272
+
273
+ ```python
274
+ from diffusers import DiffusionPipeline
275
+ import torch
276
+
277
+ model_path = "takuoko/sd-naruto-model-lora-sdxl"
278
+ pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
279
+ pipe.to("cuda")
280
+ pipe.load_lora_weights(model_path)
281
+
282
+ prompt = "A naruto with green eyes and red legs."
283
+ image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
284
+ image.save("naruto.png")
285
+ ```
requirements.txt ADDED
@@ -0,0 +1,8 @@
1
+ accelerate>=0.16.0
2
+ torchvision
3
+ transformers>=4.25.1
4
+ datasets>=2.19.1
5
+ ftfy
6
+ tensorboard
7
+ Jinja2
8
+ peft==0.7.0
requirements_flax.txt ADDED
@@ -0,0 +1,9 @@
1
+ transformers>=4.25.1
2
+ datasets
3
+ flax
4
+ optax
5
+ torch
6
+ torchvision
7
+ ftfy
8
+ tensorboard
9
+ Jinja2
requirements_sdxl.txt ADDED
@@ -0,0 +1,8 @@
1
+ accelerate>=0.22.0
2
+ torchvision
3
+ transformers>=4.25.1
4
+ ftfy
5
+ tensorboard
6
+ Jinja2
7
+ datasets
8
+ peft==0.7.0
test_text_to_image.py ADDED
@@ -0,0 +1,365 @@
1
+ #!/usr/bin/env python
2
+ # coding=utf-8
3
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+
17
+ import logging
18
+ import os
19
+ import shutil
20
+ import sys
21
+ import tempfile
22
+
23
+ from diffusers import DiffusionPipeline, UNet2DConditionModel # noqa: E402
24
+
25
+
26
+ sys.path.append("..")
27
+ from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402
28
+
29
+
30
+ logging.basicConfig(level=logging.DEBUG)
31
+
32
+ logger = logging.getLogger()
33
+ stream_handler = logging.StreamHandler(sys.stdout)
34
+ logger.addHandler(stream_handler)
35
+
36
+
37
+ class TextToImage(ExamplesTestsAccelerate):
38
+ def test_text_to_image(self):
39
+ with tempfile.TemporaryDirectory() as tmpdir:
40
+ test_args = f"""
41
+ examples/text_to_image/train_text_to_image.py
42
+ --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-pipe
43
+ --dataset_name hf-internal-testing/dummy_image_text_data
44
+ --resolution 64
45
+ --center_crop
46
+ --random_flip
47
+ --train_batch_size 1
48
+ --gradient_accumulation_steps 1
49
+ --max_train_steps 2
50
+ --learning_rate 5.0e-04
51
+ --scale_lr
52
+ --lr_scheduler constant
53
+ --lr_warmup_steps 0
54
+ --output_dir {tmpdir}
55
+ """.split()
56
+
57
+ run_command(self._launch_args + test_args)
58
+ # save_pretrained smoke test
59
+ self.assertTrue(os.path.isfile(os.path.join(tmpdir, "unet", "diffusion_pytorch_model.safetensors")))
60
+ self.assertTrue(os.path.isfile(os.path.join(tmpdir, "scheduler", "scheduler_config.json")))
61
+
62
+ def test_text_to_image_checkpointing(self):
63
+ pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
64
+ prompt = "a prompt"
65
+
66
+ with tempfile.TemporaryDirectory() as tmpdir:
67
+ # Run training script with checkpointing
68
+ # max_train_steps == 4, checkpointing_steps == 2
69
+ # Should create checkpoints at steps 2, 4
70
+
71
+ initial_run_args = f"""
72
+ examples/text_to_image/train_text_to_image.py
73
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
74
+ --dataset_name hf-internal-testing/dummy_image_text_data
75
+ --resolution 64
76
+ --center_crop
77
+ --random_flip
78
+ --train_batch_size 1
79
+ --gradient_accumulation_steps 1
80
+ --max_train_steps 4
81
+ --learning_rate 5.0e-04
82
+ --scale_lr
83
+ --lr_scheduler constant
84
+ --lr_warmup_steps 0
85
+ --output_dir {tmpdir}
86
+ --checkpointing_steps=2
87
+ --seed=0
88
+ """.split()
89
+
90
+ run_command(self._launch_args + initial_run_args)
91
+
92
+ pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
93
+ pipe(prompt, num_inference_steps=1)
94
+
95
+ # check checkpoint directories exist
96
+ self.assertEqual(
97
+ {x for x in os.listdir(tmpdir) if "checkpoint" in x},
98
+ {"checkpoint-2", "checkpoint-4"},
99
+ )
100
+
101
+ # check can run an intermediate checkpoint
102
+ unet = UNet2DConditionModel.from_pretrained(tmpdir, subfolder="checkpoint-2/unet")
103
+ pipe = DiffusionPipeline.from_pretrained(pretrained_model_name_or_path, unet=unet, safety_checker=None)
104
+ pipe(prompt, num_inference_steps=1)
105
+
106
+ # Remove checkpoint 2 so that we can check only later checkpoints exist after resuming
107
+ shutil.rmtree(os.path.join(tmpdir, "checkpoint-2"))
108
+
109
+ # Run training script for 2 total steps resuming from checkpoint 4
110
+
111
+ resume_run_args = f"""
112
+ examples/text_to_image/train_text_to_image.py
113
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
114
+ --dataset_name hf-internal-testing/dummy_image_text_data
115
+ --resolution 64
116
+ --center_crop
117
+ --random_flip
118
+ --train_batch_size 1
119
+ --gradient_accumulation_steps 1
120
+ --max_train_steps 2
121
+ --learning_rate 5.0e-04
122
+ --scale_lr
123
+ --lr_scheduler constant
124
+ --lr_warmup_steps 0
125
+ --output_dir {tmpdir}
126
+ --checkpointing_steps=1
127
+ --resume_from_checkpoint=checkpoint-4
128
+ --seed=0
129
+ """.split()
130
+
131
+ run_command(self._launch_args + resume_run_args)
132
+
133
+ # check can run new fully trained pipeline
134
+ pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
135
+ pipe(prompt, num_inference_steps=1)
136
+
137
+ # no checkpoint-2 -> check old checkpoints do not exist
138
+ # check new checkpoints exist
139
+ self.assertEqual(
140
+ {x for x in os.listdir(tmpdir) if "checkpoint" in x},
141
+ {"checkpoint-4", "checkpoint-5"},
142
+ )
143
+
144
+ def test_text_to_image_checkpointing_use_ema(self):
145
+ pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
146
+ prompt = "a prompt"
147
+
148
+ with tempfile.TemporaryDirectory() as tmpdir:
149
+ # Run training script with checkpointing
150
+ # max_train_steps == 4, checkpointing_steps == 2
151
+ # Should create checkpoints at steps 2, 4
152
+
153
+ initial_run_args = f"""
154
+ examples/text_to_image/train_text_to_image.py
155
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
156
+ --dataset_name hf-internal-testing/dummy_image_text_data
157
+ --resolution 64
158
+ --center_crop
159
+ --random_flip
160
+ --train_batch_size 1
161
+ --gradient_accumulation_steps 1
162
+ --max_train_steps 4
163
+ --learning_rate 5.0e-04
164
+ --scale_lr
165
+ --lr_scheduler constant
166
+ --lr_warmup_steps 0
167
+ --output_dir {tmpdir}
168
+ --checkpointing_steps=2
169
+ --use_ema
170
+ --seed=0
171
+ """.split()
172
+
173
+ run_command(self._launch_args + initial_run_args)
174
+
175
+ pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
176
+ pipe(prompt, num_inference_steps=2)
177
+
178
+ # check checkpoint directories exist
179
+ self.assertEqual(
180
+ {x for x in os.listdir(tmpdir) if "checkpoint" in x},
181
+ {"checkpoint-2", "checkpoint-4"},
182
+ )
183
+
184
+ # check can run an intermediate checkpoint
185
+ unet = UNet2DConditionModel.from_pretrained(tmpdir, subfolder="checkpoint-2/unet")
186
+ pipe = DiffusionPipeline.from_pretrained(pretrained_model_name_or_path, unet=unet, safety_checker=None)
187
+ pipe(prompt, num_inference_steps=1)
188
+
189
+ # Remove checkpoint 2 so that we can check only later checkpoints exist after resuming
190
+ shutil.rmtree(os.path.join(tmpdir, "checkpoint-2"))
191
+
192
+ # Run training script for 2 total steps resuming from checkpoint 4
193
+
194
+ resume_run_args = f"""
195
+ examples/text_to_image/train_text_to_image.py
196
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
197
+ --dataset_name hf-internal-testing/dummy_image_text_data
198
+ --resolution 64
199
+ --center_crop
200
+ --random_flip
201
+ --train_batch_size 1
202
+ --gradient_accumulation_steps 1
203
+ --max_train_steps 2
204
+ --learning_rate 5.0e-04
205
+ --scale_lr
206
+ --lr_scheduler constant
207
+ --lr_warmup_steps 0
208
+ --output_dir {tmpdir}
209
+ --checkpointing_steps=1
210
+ --resume_from_checkpoint=checkpoint-4
211
+ --use_ema
212
+ --seed=0
213
+ """.split()
214
+
215
+ run_command(self._launch_args + resume_run_args)
216
+
217
+ # check can run new fully trained pipeline
218
+ pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
219
+ pipe(prompt, num_inference_steps=1)
220
+
221
+ # no checkpoint-2 -> check old checkpoints do not exist
222
+ # check new checkpoints exist
223
+ self.assertEqual(
224
+ {x for x in os.listdir(tmpdir) if "checkpoint" in x},
225
+ {"checkpoint-4", "checkpoint-5"},
226
+ )
227
+
228
+ def test_text_to_image_checkpointing_checkpoints_total_limit(self):
229
+ pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
230
+ prompt = "a prompt"
231
+
232
+ with tempfile.TemporaryDirectory() as tmpdir:
233
+ # Run training script with checkpointing
234
+ # max_train_steps == 6, checkpointing_steps == 2, checkpoints_total_limit == 2
235
+ # Should create checkpoints at steps 2, 4, 6
236
+ # with checkpoint at step 2 deleted
237
+
238
+ initial_run_args = f"""
239
+ examples/text_to_image/train_text_to_image.py
240
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
241
+ --dataset_name hf-internal-testing/dummy_image_text_data
242
+ --resolution 64
243
+ --center_crop
244
+ --random_flip
245
+ --train_batch_size 1
246
+ --gradient_accumulation_steps 1
247
+ --max_train_steps 6
248
+ --learning_rate 5.0e-04
249
+ --scale_lr
250
+ --lr_scheduler constant
251
+ --lr_warmup_steps 0
252
+ --output_dir {tmpdir}
253
+ --checkpointing_steps=2
254
+ --checkpoints_total_limit=2
255
+ --seed=0
256
+ """.split()
257
+
258
+ run_command(self._launch_args + initial_run_args)
259
+
260
+ pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
261
+ pipe(prompt, num_inference_steps=1)
262
+
263
+ # check checkpoint directories exist
264
+ # checkpoint-2 should have been deleted
265
+ self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"})
266
+
267
+ def test_text_to_image_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
268
+ pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
269
+ prompt = "a prompt"
270
+
271
+ with tempfile.TemporaryDirectory() as tmpdir:
272
+ # Run training script with checkpointing
273
+ # max_train_steps == 4, checkpointing_steps == 2
274
+ # Should create checkpoints at steps 2, 4
275
+
276
+ initial_run_args = f"""
277
+ examples/text_to_image/train_text_to_image.py
278
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
279
+ --dataset_name hf-internal-testing/dummy_image_text_data
280
+ --resolution 64
281
+ --center_crop
282
+ --random_flip
283
+ --train_batch_size 1
284
+ --gradient_accumulation_steps 1
285
+ --max_train_steps 4
286
+ --learning_rate 5.0e-04
287
+ --scale_lr
288
+ --lr_scheduler constant
289
+ --lr_warmup_steps 0
290
+ --output_dir {tmpdir}
291
+ --checkpointing_steps=2
292
+ --seed=0
293
+ """.split()
294
+
295
+ run_command(self._launch_args + initial_run_args)
296
+
297
+ pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
298
+ pipe(prompt, num_inference_steps=1)
299
+
300
+ # check checkpoint directories exist
301
+ self.assertEqual(
302
+ {x for x in os.listdir(tmpdir) if "checkpoint" in x},
303
+ {"checkpoint-2", "checkpoint-4"},
304
+ )
305
+
306
+ # resume and we should try to checkpoint at 6, where we'll have to remove
307
+ # checkpoint-2 and checkpoint-4 instead of just a single previous checkpoint
308
+
309
+ resume_run_args = f"""
310
+ examples/text_to_image/train_text_to_image.py
311
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
312
+ --dataset_name hf-internal-testing/dummy_image_text_data
313
+ --resolution 64
314
+ --center_crop
315
+ --random_flip
316
+ --train_batch_size 1
317
+ --gradient_accumulation_steps 1
318
+ --max_train_steps 8
319
+ --learning_rate 5.0e-04
320
+ --scale_lr
321
+ --lr_scheduler constant
322
+ --lr_warmup_steps 0
323
+ --output_dir {tmpdir}
324
+ --checkpointing_steps=2
325
+ --resume_from_checkpoint=checkpoint-4
326
+ --checkpoints_total_limit=2
327
+ --seed=0
328
+ """.split()
329
+
330
+ run_command(self._launch_args + resume_run_args)
331
+
332
+ pipe = DiffusionPipeline.from_pretrained(tmpdir, safety_checker=None)
333
+ pipe(prompt, num_inference_steps=1)
334
+
335
+ # check checkpoint directories exist
336
+ self.assertEqual(
337
+ {x for x in os.listdir(tmpdir) if "checkpoint" in x},
338
+ {"checkpoint-6", "checkpoint-8"},
339
+ )
340
+
341
+
342
+ class TextToImageSDXL(ExamplesTestsAccelerate):
343
+ def test_text_to_image_sdxl(self):
344
+ with tempfile.TemporaryDirectory() as tmpdir:
345
+ test_args = f"""
346
+ examples/text_to_image/train_text_to_image_sdxl.py
347
+ --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-xl-pipe
348
+ --dataset_name hf-internal-testing/dummy_image_text_data
349
+ --resolution 64
350
+ --center_crop
351
+ --random_flip
352
+ --train_batch_size 1
353
+ --gradient_accumulation_steps 1
354
+ --max_train_steps 2
355
+ --learning_rate 5.0e-04
356
+ --scale_lr
357
+ --lr_scheduler constant
358
+ --lr_warmup_steps 0
359
+ --output_dir {tmpdir}
360
+ """.split()
361
+
362
+ run_command(self._launch_args + test_args)
363
+ # save_pretrained smoke test
364
+ self.assertTrue(os.path.isfile(os.path.join(tmpdir, "unet", "diffusion_pytorch_model.safetensors")))
365
+ self.assertTrue(os.path.isfile(os.path.join(tmpdir, "scheduler", "scheduler_config.json")))
test_text_to_image_lora.py ADDED
@@ -0,0 +1,300 @@
1
+ #!/usr/bin/env python
2
+ # coding=utf-8
3
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+
17
+ import logging
18
+ import os
19
+ import sys
20
+ import tempfile
21
+
22
+ import safetensors
23
+
24
+ from diffusers import DiffusionPipeline # noqa: E402
25
+
26
+
27
+ sys.path.append("..")
28
+ from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402
29
+
30
+
31
+ logging.basicConfig(level=logging.DEBUG)
32
+
33
+ logger = logging.getLogger()
34
+ stream_handler = logging.StreamHandler(sys.stdout)
35
+ logger.addHandler(stream_handler)
36
+
37
+
38
+ class TextToImageLoRA(ExamplesTestsAccelerate):
39
+ def test_text_to_image_lora_sdxl_checkpointing_checkpoints_total_limit(self):
40
+ prompt = "a prompt"
41
+ pipeline_path = "hf-internal-testing/tiny-stable-diffusion-xl-pipe"
42
+
43
+ with tempfile.TemporaryDirectory() as tmpdir:
44
+ # Run training script with checkpointing
45
+ # max_train_steps == 6, checkpointing_steps == 2, checkpoints_total_limit == 2
46
+ # Should create checkpoints at steps 2, 4, 6
47
+ # with checkpoint at step 2 deleted
48
+
49
+ initial_run_args = f"""
50
+ examples/text_to_image/train_text_to_image_lora_sdxl.py
51
+ --pretrained_model_name_or_path {pipeline_path}
52
+ --dataset_name hf-internal-testing/dummy_image_text_data
53
+ --resolution 64
54
+ --train_batch_size 1
55
+ --gradient_accumulation_steps 1
56
+ --max_train_steps 6
57
+ --learning_rate 5.0e-04
58
+ --scale_lr
59
+ --lr_scheduler constant
60
+ --lr_warmup_steps 0
61
+ --output_dir {tmpdir}
62
+ --checkpointing_steps=2
63
+ --checkpoints_total_limit=2
64
+ """.split()
65
+
66
+ run_command(self._launch_args + initial_run_args)
67
+
68
+ pipe = DiffusionPipeline.from_pretrained(pipeline_path)
69
+ pipe.load_lora_weights(tmpdir)
70
+ pipe(prompt, num_inference_steps=1)
71
+
72
+ # check checkpoint directories exist
73
+ # checkpoint-2 should have been deleted
74
+ self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"})
75
+
76
+ def test_text_to_image_lora_checkpointing_checkpoints_total_limit(self):
77
+ pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
78
+ prompt = "a prompt"
79
+
80
+ with tempfile.TemporaryDirectory() as tmpdir:
81
+ # Run training script with checkpointing
82
+ # max_train_steps == 6, checkpointing_steps == 2, checkpoints_total_limit == 2
83
+ # Should create checkpoints at steps 2, 4, 6
84
+ # with checkpoint at step 2 deleted
85
+
86
+ initial_run_args = f"""
87
+ examples/text_to_image/train_text_to_image_lora.py
88
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
89
+ --dataset_name hf-internal-testing/dummy_image_text_data
90
+ --resolution 64
91
+ --center_crop
92
+ --random_flip
93
+ --train_batch_size 1
94
+ --gradient_accumulation_steps 1
95
+ --max_train_steps 6
96
+ --learning_rate 5.0e-04
97
+ --scale_lr
98
+ --lr_scheduler constant
99
+ --lr_warmup_steps 0
100
+ --output_dir {tmpdir}
101
+ --checkpointing_steps=2
102
+ --checkpoints_total_limit=2
103
+ --seed=0
104
+ --num_validation_images=0
105
+ """.split()
106
+
107
+ run_command(self._launch_args + initial_run_args)
108
+
109
+ pipe = DiffusionPipeline.from_pretrained(
110
+ "hf-internal-testing/tiny-stable-diffusion-pipe", safety_checker=None
111
+ )
112
+ pipe.load_lora_weights(tmpdir)
113
+ pipe(prompt, num_inference_steps=1)
114
+
115
+ # check checkpoint directories exist
116
+ # checkpoint-2 should have been deleted
117
+ self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"})
118
+
119
+ def test_text_to_image_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints(self):
120
+ pretrained_model_name_or_path = "hf-internal-testing/tiny-stable-diffusion-pipe"
121
+ prompt = "a prompt"
122
+
123
+ with tempfile.TemporaryDirectory() as tmpdir:
124
+ # Run training script with checkpointing
125
+ # max_train_steps == 4, checkpointing_steps == 2
126
+ # Should create checkpoints at steps 2, 4
127
+
128
+ initial_run_args = f"""
129
+ examples/text_to_image/train_text_to_image_lora.py
130
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
131
+ --dataset_name hf-internal-testing/dummy_image_text_data
132
+ --resolution 64
133
+ --center_crop
134
+ --random_flip
135
+ --train_batch_size 1
136
+ --gradient_accumulation_steps 1
137
+ --max_train_steps 4
138
+ --learning_rate 5.0e-04
139
+ --scale_lr
140
+ --lr_scheduler constant
141
+ --lr_warmup_steps 0
142
+ --output_dir {tmpdir}
143
+ --checkpointing_steps=2
144
+ --seed=0
145
+ --num_validation_images=0
146
+ """.split()
147
+
148
+ run_command(self._launch_args + initial_run_args)
149
+
150
+ pipe = DiffusionPipeline.from_pretrained(
151
+ "hf-internal-testing/tiny-stable-diffusion-pipe", safety_checker=None
152
+ )
153
+ pipe.load_lora_weights(tmpdir)
154
+ pipe(prompt, num_inference_steps=1)
155
+
156
+ # check checkpoint directories exist
157
+ self.assertEqual(
158
+ {x for x in os.listdir(tmpdir) if "checkpoint" in x},
159
+ {"checkpoint-2", "checkpoint-4"},
160
+ )
161
+
162
+ # resume and we should try to checkpoint at 6, where we'll have to remove
163
+ # checkpoint-2 and checkpoint-4 instead of just a single previous checkpoint
164
+
165
+ resume_run_args = f"""
166
+ examples/text_to_image/train_text_to_image_lora.py
167
+ --pretrained_model_name_or_path {pretrained_model_name_or_path}
168
+ --dataset_name hf-internal-testing/dummy_image_text_data
169
+ --resolution 64
170
+ --center_crop
171
+ --random_flip
172
+ --train_batch_size 1
173
+ --gradient_accumulation_steps 1
174
+ --max_train_steps 8
175
+ --learning_rate 5.0e-04
176
+ --scale_lr
177
+ --lr_scheduler constant
178
+ --lr_warmup_steps 0
179
+ --output_dir {tmpdir}
180
+ --checkpointing_steps=2
181
+ --resume_from_checkpoint=checkpoint-4
182
+ --checkpoints_total_limit=2
183
+ --seed=0
184
+ --num_validation_images=0
185
+ """.split()
186
+
187
+ run_command(self._launch_args + resume_run_args)
188
+
189
+ pipe = DiffusionPipeline.from_pretrained(
190
+ "hf-internal-testing/tiny-stable-diffusion-pipe", safety_checker=None
191
+ )
192
+ pipe.load_lora_weights(tmpdir)
193
+ pipe(prompt, num_inference_steps=1)
194
+
195
+ # check checkpoint directories exist
196
+ self.assertEqual(
197
+ {x for x in os.listdir(tmpdir) if "checkpoint" in x},
198
+ {"checkpoint-6", "checkpoint-8"},
199
+ )
200
+
201
+
202
+ class TextToImageLoRASDXL(ExamplesTestsAccelerate):
203
+ def test_text_to_image_lora_sdxl(self):
204
+ with tempfile.TemporaryDirectory() as tmpdir:
205
+ test_args = f"""
206
+ examples/text_to_image/train_text_to_image_lora_sdxl.py
207
+ --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-xl-pipe
208
+ --dataset_name hf-internal-testing/dummy_image_text_data
209
+ --resolution 64
210
+ --train_batch_size 1
211
+ --gradient_accumulation_steps 1
212
+ --max_train_steps 2
213
+ --learning_rate 5.0e-04
214
+ --scale_lr
215
+ --lr_scheduler constant
216
+ --lr_warmup_steps 0
217
+ --output_dir {tmpdir}
218
+ """.split()
219
+
220
+ run_command(self._launch_args + test_args)
221
+ # save_pretrained smoke test
222
+ self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))
223
+
224
+ # make sure the state_dict has the correct naming in the parameters.
225
+ lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
226
+ is_lora = all("lora" in k for k in lora_state_dict.keys())
227
+ self.assertTrue(is_lora)
228
+
229
+ def test_text_to_image_lora_sdxl_with_text_encoder(self):
230
+ with tempfile.TemporaryDirectory() as tmpdir:
231
+ test_args = f"""
232
+ examples/text_to_image/train_text_to_image_lora_sdxl.py
233
+ --pretrained_model_name_or_path hf-internal-testing/tiny-stable-diffusion-xl-pipe
234
+ --dataset_name hf-internal-testing/dummy_image_text_data
235
+ --resolution 64
236
+ --train_batch_size 1
237
+ --gradient_accumulation_steps 1
238
+ --max_train_steps 2
239
+ --learning_rate 5.0e-04
240
+ --scale_lr
241
+ --lr_scheduler constant
242
+ --lr_warmup_steps 0
243
+ --output_dir {tmpdir}
244
+ --train_text_encoder
245
+ """.split()
246
+
247
+ run_command(self._launch_args + test_args)
248
+ # save_pretrained smoke test
249
+ self.assertTrue(os.path.isfile(os.path.join(tmpdir, "pytorch_lora_weights.safetensors")))
250
+
251
+ # make sure the state_dict has the correct naming in the parameters.
252
+ lora_state_dict = safetensors.torch.load_file(os.path.join(tmpdir, "pytorch_lora_weights.safetensors"))
253
+ is_lora = all("lora" in k for k in lora_state_dict.keys())
254
+ self.assertTrue(is_lora)
255
+
256
+ # when training the text encoder, all the parameters in the state dict should start
257
+ # with `"unet"` or `"text_encoder"` or `"text_encoder_2"` in their names.
258
+ keys = lora_state_dict.keys()
259
+ starts_with_unet = all(
260
+ k.startswith("unet") or k.startswith("text_encoder") or k.startswith("text_encoder_2") for k in keys
261
+ )
262
+ self.assertTrue(starts_with_unet)
263
+
264
+ def test_text_to_image_lora_sdxl_text_encoder_checkpointing_checkpoints_total_limit(self):
265
+ prompt = "a prompt"
266
+ pipeline_path = "hf-internal-testing/tiny-stable-diffusion-xl-pipe"
267
+
268
+ with tempfile.TemporaryDirectory() as tmpdir:
269
+ # Run training script with checkpointing
270
+ # max_train_steps == 6, checkpointing_steps == 2, checkpoints_total_limit == 2
271
+ # Should create checkpoints at steps 2, 4, 6
272
+ # with checkpoint at step 2 deleted
273
+
274
+ initial_run_args = f"""
275
+ examples/text_to_image/train_text_to_image_lora_sdxl.py
276
+ --pretrained_model_name_or_path {pipeline_path}
277
+ --dataset_name hf-internal-testing/dummy_image_text_data
278
+ --resolution 64
279
+ --train_batch_size 1
280
+ --gradient_accumulation_steps 1
281
+ --max_train_steps 6
282
+ --learning_rate 5.0e-04
283
+ --scale_lr
284
+ --lr_scheduler constant
285
+ --train_text_encoder
286
+ --lr_warmup_steps 0
287
+ --output_dir {tmpdir}
288
+ --checkpointing_steps=2
289
+ --checkpoints_total_limit=2
290
+ """.split()
291
+
292
+ run_command(self._launch_args + initial_run_args)
293
+
294
+ pipe = DiffusionPipeline.from_pretrained(pipeline_path)
295
+ pipe.load_lora_weights(tmpdir)
296
+ pipe(prompt, num_inference_steps=1)
297
+
298
+ # check checkpoint directories exist
299
+ # checkpoint-2 should have been deleted
300
+ self.assertEqual({x for x in os.listdir(tmpdir) if "checkpoint" in x}, {"checkpoint-4", "checkpoint-6"})
train_text_to_image.py ADDED
@@ -0,0 +1,1153 @@
1
+ #!/usr/bin/env python
2
+ # coding=utf-8
3
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+
17
+ import argparse
18
+ import logging
19
+ import math
20
+ import os
21
+ import random
22
+ import shutil
23
+ from contextlib import nullcontext
24
+ from pathlib import Path
25
+
26
+ import accelerate
27
+ import datasets
28
+ import numpy as np
29
+ import torch
30
+ import torch.nn.functional as F
31
+ import torch.utils.checkpoint
32
+ import transformers
33
+ from accelerate import Accelerator
34
+ from accelerate.logging import get_logger
35
+ from accelerate.state import AcceleratorState
36
+ from accelerate.utils import ProjectConfiguration, set_seed
37
+ from datasets import load_dataset
38
+ from huggingface_hub import create_repo, upload_folder
39
+ from packaging import version
40
+ from torchvision import transforms
41
+ from tqdm.auto import tqdm
42
+ from transformers import CLIPTextModel, CLIPTokenizer
43
+ from transformers.utils import ContextManagers
44
+
45
+ import diffusers
46
+ from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
47
+ from diffusers.optimization import get_scheduler
48
+ from diffusers.training_utils import EMAModel, compute_dream_and_update_latents, compute_snr
49
+ from diffusers.utils import check_min_version, deprecate, is_wandb_available, make_image_grid
50
+ from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card
51
+ from diffusers.utils.import_utils import is_xformers_available
52
+ from diffusers.utils.torch_utils import is_compiled_module
53
+
54
+
55
+ if is_wandb_available():
56
+ import wandb
57
+
58
+
59
+ # Will error if the minimal version of diffusers is not installed. Remove at your own risk.
60
+ check_min_version("0.33.0.dev0")
61
+
62
+ logger = get_logger(__name__, log_level="INFO")
63
+
64
+ DATASET_NAME_MAPPING = {
65
+ "lambdalabs/naruto-blip-captions": ("image", "text"),
66
+ }
67
+
68
+
69
+ def save_model_card(
70
+ args,
71
+ repo_id: str,
72
+ images: list = None,
73
+ repo_folder: str = None,
74
+ ):
75
+ img_str = ""
76
+ if len(images) > 0:
77
+ image_grid = make_image_grid(images, 1, len(args.validation_prompts))
78
+ image_grid.save(os.path.join(repo_folder, "val_imgs_grid.png"))
79
+ img_str += "![val_imgs_grid](./val_imgs_grid.png)\n"
80
+
81
+ model_description = f"""
82
+ # Text-to-image finetuning - {repo_id}
83
+
84
+ This pipeline was finetuned from **{args.pretrained_model_name_or_path}** on the **{args.dataset_name}** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: {args.validation_prompts}\n
85
+ {img_str}
86
+
87
+ ## Pipeline usage
88
+
89
+ You can use the pipeline like so:
90
+
91
+ ```python
92
+ from diffusers import DiffusionPipeline
93
+ import torch
94
+
95
+ pipeline = DiffusionPipeline.from_pretrained("{repo_id}", torch_dtype=torch.float16)
96
+ prompt = "{args.validation_prompts[0]}"
97
+ image = pipeline(prompt).images[0]
98
+ image.save("my_image.png")
99
+ ```
100
+
101
+ ## Training info
102
+
103
+ These are the key hyperparameters used during training:
104
+
105
+ * Epochs: {args.num_train_epochs}
106
+ * Learning rate: {args.learning_rate}
107
+ * Batch size: {args.train_batch_size}
108
+ * Gradient accumulation steps: {args.gradient_accumulation_steps}
109
+ * Image resolution: {args.resolution}
110
+ * Mixed-precision: {args.mixed_precision}
111
+
112
+ """
113
+ wandb_info = ""
114
+ if is_wandb_available():
115
+ wandb_run_url = None
116
+ if wandb.run is not None:
117
+ wandb_run_url = wandb.run.url
118
+
119
+ if wandb_run_url is not None:
120
+ wandb_info = f"""
121
+ More information on all the CLI arguments and the environment are available on your [`wandb` run page]({wandb_run_url}).
122
+ """
123
+
124
+ model_description += wandb_info
125
+
126
+ model_card = load_or_create_model_card(
127
+ repo_id_or_path=repo_id,
128
+ from_training=True,
129
+ license="creativeml-openrail-m",
130
+ base_model=args.pretrained_model_name_or_path,
131
+ model_description=model_description,
132
+ inference=True,
133
+ )
134
+
135
+ tags = ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "diffusers-training"]
136
+ model_card = populate_model_card(model_card, tags=tags)
137
+
138
+ model_card.save(os.path.join(repo_folder, "README.md"))
139
+
140
+
141
+ def log_validation(vae, text_encoder, tokenizer, unet, args, accelerator, weight_dtype, epoch):
142
+ logger.info("Running validation... ")
143
+
144
+ pipeline = StableDiffusionPipeline.from_pretrained(
145
+ args.pretrained_model_name_or_path,
146
+ vae=accelerator.unwrap_model(vae),
147
+ text_encoder=accelerator.unwrap_model(text_encoder),
148
+ tokenizer=tokenizer,
149
+ unet=accelerator.unwrap_model(unet),
150
+ safety_checker=None,
151
+ revision=args.revision,
152
+ variant=args.variant,
153
+ torch_dtype=weight_dtype,
154
+ )
155
+ pipeline = pipeline.to(accelerator.device)
156
+ pipeline.set_progress_bar_config(disable=True)
157
+
158
+ if args.enable_xformers_memory_efficient_attention:
159
+ pipeline.enable_xformers_memory_efficient_attention()
160
+
161
+ if args.seed is None:
162
+ generator = None
163
+ else:
164
+ generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
165
+
166
+ images = []
167
+ for i in range(len(args.validation_prompts)):
168
+ if torch.backends.mps.is_available():
169
+ autocast_ctx = nullcontext()
170
+ else:
171
+ autocast_ctx = torch.autocast(accelerator.device.type)
172
+
173
+ with autocast_ctx:
174
+ image = pipeline(args.validation_prompts[i], num_inference_steps=20, generator=generator).images[0]
175
+
176
+ images.append(image)
177
+
178
+ for tracker in accelerator.trackers:
179
+ if tracker.name == "tensorboard":
180
+ np_images = np.stack([np.asarray(img) for img in images])
181
+ tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC")
182
+ elif tracker.name == "wandb":
183
+ tracker.log(
184
+ {
185
+ "validation": [
186
+ wandb.Image(image, caption=f"{i}: {args.validation_prompts[i]}")
187
+ for i, image in enumerate(images)
188
+ ]
189
+ }
190
+ )
191
+ else:
192
+ logger.warning(f"image logging not implemented for {tracker.name}")
193
+
194
+ del pipeline
195
+ torch.cuda.empty_cache()
196
+
197
+ return images
198
+
199
+
200
+ def parse_args():
201
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
202
+ parser.add_argument(
203
+ "--input_perturbation", type=float, default=0, help="The scale of input perturbation. Recommended 0.1."
204
+ )
205
+ parser.add_argument(
206
+ "--pretrained_model_name_or_path",
207
+ type=str,
208
+ default=None,
209
+ required=True,
210
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
211
+ )
212
+ parser.add_argument(
213
+ "--revision",
214
+ type=str,
215
+ default=None,
216
+ required=False,
217
+ help="Revision of pretrained model identifier from huggingface.co/models.",
218
+ )
219
+ parser.add_argument(
220
+ "--variant",
221
+ type=str,
222
+ default=None,
223
+ help="Variant of the model files of the pretrained model identifier from huggingface.co/models, e.g. fp16",
224
+ )
225
+ parser.add_argument(
226
+ "--dataset_name",
227
+ type=str,
228
+ default=None,
229
+ help=(
230
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
231
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
232
+ " or to a folder containing files that 🤗 Datasets can understand."
233
+ ),
234
+ )
235
+ parser.add_argument(
236
+ "--dataset_config_name",
237
+ type=str,
238
+ default=None,
239
+ help="The config of the Dataset, leave as None if there's only one config.",
240
+ )
241
+ parser.add_argument(
242
+ "--train_data_dir",
243
+ type=str,
244
+ default=None,
245
+ help=(
246
+ "A folder containing the training data. Folder contents must follow the structure described in"
247
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
248
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
249
+ ),
250
+ )
251
+ parser.add_argument(
252
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
253
+ )
254
+ parser.add_argument(
255
+ "--caption_column",
256
+ type=str,
257
+ default="text",
258
+ help="The column of the dataset containing a caption or a list of captions.",
259
+ )
260
+ parser.add_argument(
261
+ "--max_train_samples",
262
+ type=int,
263
+ default=None,
264
+ help=(
265
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
266
+ "value if set."
267
+ ),
268
+ )
269
+ parser.add_argument(
270
+ "--validation_prompts",
271
+ type=str,
272
+ default=None,
273
+ nargs="+",
274
+ help=("A set of prompts evaluated every `--validation_epochs` and logged to `--report_to`."),
275
+ )
276
+ parser.add_argument(
277
+ "--output_dir",
278
+ type=str,
279
+ default="sd-model-finetuned",
280
+ help="The output directory where the model predictions and checkpoints will be written.",
281
+ )
282
+ parser.add_argument(
283
+ "--cache_dir",
284
+ type=str,
285
+ default=None,
286
+ help="The directory where the downloaded models and datasets will be stored.",
287
+ )
288
+ parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
289
+ parser.add_argument(
290
+ "--resolution",
291
+ type=int,
292
+ default=512,
293
+ help=(
294
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
295
+ " resolution"
296
+ ),
297
+ )
298
+ parser.add_argument(
299
+ "--center_crop",
300
+ default=False,
301
+ action="store_true",
302
+ help=(
303
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
304
+ " cropped. The images will be resized to the resolution first before cropping."
305
+ ),
306
+ )
307
+ parser.add_argument(
308
+ "--random_flip",
309
+ action="store_true",
310
+ help="whether to randomly flip images horizontally",
311
+ )
312
+ parser.add_argument(
313
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
314
+ )
315
+ parser.add_argument("--num_train_epochs", type=int, default=100)
316
+ parser.add_argument(
317
+ "--max_train_steps",
318
+ type=int,
319
+ default=None,
320
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
321
+ )
322
+ parser.add_argument(
323
+ "--gradient_accumulation_steps",
324
+ type=int,
325
+ default=1,
326
+ help="Number of update steps to accumulate before performing a backward/update pass.",
327
+ )
328
+ parser.add_argument(
329
+ "--gradient_checkpointing",
330
+ action="store_true",
331
+ help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
332
+ )
333
+ parser.add_argument(
334
+ "--learning_rate",
335
+ type=float,
336
+ default=1e-4,
337
+ help="Initial learning rate (after the potential warmup period) to use.",
338
+ )
339
+ parser.add_argument(
340
+ "--scale_lr",
341
+ action="store_true",
342
+ default=False,
343
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
344
+ )
345
+ parser.add_argument(
346
+ "--lr_scheduler",
347
+ type=str,
348
+ default="constant",
349
+ help=(
350
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
351
+ ' "constant", "constant_with_warmup"]'
352
+ ),
353
+ )
354
+ parser.add_argument(
355
+ "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
356
+ )
357
+ parser.add_argument(
358
+ "--snr_gamma",
359
+ type=float,
360
+ default=None,
361
+ help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. "
362
+ "More details here: https://arxiv.org/abs/2303.09556.",
363
+ )
364
+ parser.add_argument(
365
+ "--dream_training",
366
+ action="store_true",
367
+ help=(
368
+ "Use the DREAM training method, which makes training more efficient and accurate at the "
369
+ "expense of doing an extra forward pass. See: https://arxiv.org/abs/2312.00210"
370
+ ),
371
+ )
372
+ parser.add_argument(
373
+ "--dream_detail_preservation",
374
+ type=float,
375
+ default=1.0,
376
+ help="Dream detail preservation factor p (should be greater than 0; default=1.0, as suggested in the paper)",
377
+ )
378
+ parser.add_argument(
379
+ "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
380
+ )
381
+ parser.add_argument(
382
+ "--allow_tf32",
383
+ action="store_true",
384
+ help=(
385
+ "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
386
+ " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
387
+ ),
388
+ )
389
+ parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.")
390
+ parser.add_argument("--offload_ema", action="store_true", help="Offload EMA model to CPU during training step.")
391
+ parser.add_argument("--foreach_ema", action="store_true", help="Use faster foreach implementation of EMAModel.")
392
+ parser.add_argument(
393
+ "--non_ema_revision",
394
+ type=str,
395
+ default=None,
396
+ required=False,
397
+ help=(
398
+ "Revision of pretrained non-ema model identifier. Must be a branch, tag or git identifier of the local or"
399
+ " remote repository specified with --pretrained_model_name_or_path."
400
+ ),
401
+ )
402
+ parser.add_argument(
403
+ "--dataloader_num_workers",
404
+ type=int,
405
+ default=0,
406
+ help=(
407
+ "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
408
+ ),
409
+ )
410
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
411
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
412
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
413
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
414
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
415
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
416
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
417
+ parser.add_argument(
418
+ "--prediction_type",
419
+ type=str,
420
+ default=None,
421
+ help="The prediction_type that shall be used for training. Choose between 'epsilon' or 'v_prediction' or leave `None`. If left to `None` the default prediction type of the scheduler: `noise_scheduler.config.prediction_type` is chosen.",
422
+ )
423
+ parser.add_argument(
424
+ "--hub_model_id",
425
+ type=str,
426
+ default=None,
427
+ help="The name of the repository to keep in sync with the local `output_dir`.",
428
+ )
429
+ parser.add_argument(
430
+ "--logging_dir",
431
+ type=str,
432
+ default="logs",
433
+ help=(
434
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
435
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
436
+ ),
437
+ )
438
+ parser.add_argument(
439
+ "--mixed_precision",
440
+ type=str,
441
+ default=None,
442
+ choices=["no", "fp16", "bf16"],
443
+ help=(
444
+ "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
445
+ " 1.10 and an Nvidia Ampere GPU. Defaults to the value of the accelerate config of the current system or the"
446
+ " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
447
+ ),
448
+ )
449
+ parser.add_argument(
450
+ "--report_to",
451
+ type=str,
452
+ default="tensorboard",
453
+ help=(
454
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
455
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
456
+ ),
457
+ )
458
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
459
+ parser.add_argument(
460
+ "--checkpointing_steps",
461
+ type=int,
462
+ default=500,
463
+ help=(
464
+ "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
465
+ " training using `--resume_from_checkpoint`."
466
+ ),
467
+ )
468
+ parser.add_argument(
469
+ "--checkpoints_total_limit",
470
+ type=int,
471
+ default=None,
472
+ help=("Max number of checkpoints to store."),
473
+ )
474
+ parser.add_argument(
475
+ "--resume_from_checkpoint",
476
+ type=str,
477
+ default=None,
478
+ help=(
479
+ "Whether training should be resumed from a previous checkpoint. Use a path saved by"
480
+ ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
481
+ ),
482
+ )
483
+ parser.add_argument(
484
+ "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
485
+ )
486
+ parser.add_argument("--noise_offset", type=float, default=0, help="The scale of noise offset.")
487
+ parser.add_argument(
488
+ "--validation_epochs",
489
+ type=int,
490
+ default=5,
491
+ help="Run validation every X epochs.",
492
+ )
493
+ parser.add_argument(
494
+ "--tracker_project_name",
495
+ type=str,
496
+ default="text2image-fine-tune",
497
+ help=(
498
+ "The `project_name` argument passed to Accelerator.init_trackers. For"
499
+ " more information, see https://huggingface.co/docs/accelerate/v0.17.0/en/package_reference/accelerator#accelerate.Accelerator"
500
+ ),
501
+ )
502
+
503
+ args = parser.parse_args()
504
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
505
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
506
+ args.local_rank = env_local_rank
507
+
508
+ # Sanity checks
509
+ if args.dataset_name is None and args.train_data_dir is None:
510
+ raise ValueError("Need either a dataset name or a training folder.")
511
+
512
+ # default to using the same revision for the non-ema model if not specified
513
+ if args.non_ema_revision is None:
514
+ args.non_ema_revision = args.revision
515
+
516
+ return args
517
+
518
+
519
+ def main():
520
+ args = parse_args()
521
+
522
+ if args.report_to == "wandb" and args.hub_token is not None:
523
+ raise ValueError(
524
+ "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
525
+ " Please use `huggingface-cli login` to authenticate with the Hub."
526
+ )
527
+
528
+ if args.non_ema_revision is not None:
529
+ deprecate(
530
+ "non_ema_revision!=None",
531
+ "0.15.0",
532
+ message=(
533
+ "Downloading 'non_ema' weights from revision branches of the Hub is deprecated. Please make sure to"
534
+ " use `--variant=non_ema` instead."
535
+ ),
536
+ )
537
+ logging_dir = os.path.join(args.output_dir, args.logging_dir)
538
+
539
+ accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
540
+
541
+ accelerator = Accelerator(
542
+ gradient_accumulation_steps=args.gradient_accumulation_steps,
543
+ mixed_precision=args.mixed_precision,
544
+ log_with=args.report_to,
545
+ project_config=accelerator_project_config,
546
+ )
547
+
548
+ # Disable AMP for MPS.
549
+ if torch.backends.mps.is_available():
550
+ accelerator.native_amp = False
551
+
552
+ # Make one log on every process with the configuration for debugging.
553
+ logging.basicConfig(
554
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
555
+ datefmt="%m/%d/%Y %H:%M:%S",
556
+ level=logging.INFO,
557
+ )
558
+ logger.info(accelerator.state, main_process_only=False)
559
+ if accelerator.is_local_main_process:
560
+ datasets.utils.logging.set_verbosity_warning()
561
+ transformers.utils.logging.set_verbosity_warning()
562
+ diffusers.utils.logging.set_verbosity_info()
563
+ else:
564
+ datasets.utils.logging.set_verbosity_error()
565
+ transformers.utils.logging.set_verbosity_error()
566
+ diffusers.utils.logging.set_verbosity_error()
567
+
568
+ # If passed along, set the training seed now.
569
+ if args.seed is not None:
570
+ set_seed(args.seed)
571
+
572
+ # Handle the repository creation
573
+ if accelerator.is_main_process:
574
+ if args.output_dir is not None:
575
+ os.makedirs(args.output_dir, exist_ok=True)
576
+
577
+ if args.push_to_hub:
578
+ repo_id = create_repo(
579
+ repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
580
+ ).repo_id
581
+
582
+ # Load scheduler, tokenizer and models.
583
+ noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
584
+ tokenizer = CLIPTokenizer.from_pretrained(
585
+ args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision
586
+ )
587
+
588
+ def deepspeed_zero_init_disabled_context_manager():
589
+ """
590
+ Returns a list containing a context manager that disables DeepSpeed's zero.Init when a DeepSpeed plugin is active, otherwise an empty list.
591
+ """
592
+ deepspeed_plugin = AcceleratorState().deepspeed_plugin if accelerate.state.is_initialized() else None
593
+ if deepspeed_plugin is None:
594
+ return []
595
+
596
+ return [deepspeed_plugin.zero3_init_context_manager(enable=False)]
597
+
598
+ # Currently Accelerate doesn't know how to handle multiple models under Deepspeed ZeRO stage 3.
599
+ # For this to work properly all models must be run through `accelerate.prepare`. But accelerate
600
+ # will try to assign the same optimizer with the same weights to all models during
601
+ # `deepspeed.initialize`, which of course doesn't work.
602
+ #
603
+ # For now the following workaround will partially support Deepspeed ZeRO-3, by excluding the 2
604
+ # frozen models from being partitioned during `zero.Init` which gets called during
605
+ # `from_pretrained`. So CLIPTextModel and AutoencoderKL will not enjoy the parameter sharding
606
+ # across multiple gpus and only UNet2DConditionModel will get ZeRO sharded.
607
+ with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
608
+ text_encoder = CLIPTextModel.from_pretrained(
609
+ args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, variant=args.variant
610
+ )
611
+ vae = AutoencoderKL.from_pretrained(
612
+ args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision, variant=args.variant
613
+ )
614
+
615
+ unet = UNet2DConditionModel.from_pretrained(
616
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.non_ema_revision
617
+ )
618
+
619
+ # Freeze vae and text_encoder and set unet to trainable
620
+ vae.requires_grad_(False)
621
+ text_encoder.requires_grad_(False)
622
+ unet.train()
623
+
624
+ # Create EMA for the unet.
625
+ if args.use_ema:
626
+ ema_unet = UNet2DConditionModel.from_pretrained(
627
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, variant=args.variant
628
+ )
629
+ ema_unet = EMAModel(
630
+ ema_unet.parameters(),
631
+ model_cls=UNet2DConditionModel,
632
+ model_config=ema_unet.config,
633
+ foreach=args.foreach_ema,
634
+ )
635
+
636
+ if args.enable_xformers_memory_efficient_attention:
637
+ if is_xformers_available():
638
+ import xformers
639
+
640
+ xformers_version = version.parse(xformers.__version__)
641
+ if xformers_version == version.parse("0.0.16"):
642
+ logger.warning(
643
+ "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
644
+ )
645
+ unet.enable_xformers_memory_efficient_attention()
646
+ else:
647
+ raise ValueError("xformers is not available. Make sure it is installed correctly")
648
+
649
+ # `accelerate` 0.16.0 will have better support for customized saving
650
+ if version.parse(accelerate.__version__) >= version.parse("0.16.0"):
651
+ # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
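+ # The hooks below store the UNet (and, when enabled, the EMA UNet) in the diffusers
+ # `save_pretrained` layout under `unet/` and `unet_ema/`, and load them back the same way.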
652
+ def save_model_hook(models, weights, output_dir):
653
+ if accelerator.is_main_process:
654
+ if args.use_ema:
655
+ ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema"))
656
+
657
+ for i, model in enumerate(models):
658
+ model.save_pretrained(os.path.join(output_dir, "unet"))
659
+
660
+ # make sure to pop weight so that corresponding model is not saved again
661
+ weights.pop()
662
+
663
+ def load_model_hook(models, input_dir):
664
+ if args.use_ema:
665
+ load_model = EMAModel.from_pretrained(
666
+ os.path.join(input_dir, "unet_ema"), UNet2DConditionModel, foreach=args.foreach_ema
667
+ )
668
+ ema_unet.load_state_dict(load_model.state_dict())
669
+ if args.offload_ema:
670
+ ema_unet.pin_memory()
671
+ else:
672
+ ema_unet.to(accelerator.device)
673
+ del load_model
674
+
675
+ for _ in range(len(models)):
676
+ # pop models so that they are not loaded again
677
+ model = models.pop()
678
+
679
+ # load diffusers style into model
680
+ load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet")
681
+ model.register_to_config(**load_model.config)
682
+
683
+ model.load_state_dict(load_model.state_dict())
684
+ del load_model
685
+
686
+ accelerator.register_save_state_pre_hook(save_model_hook)
687
+ accelerator.register_load_state_pre_hook(load_model_hook)
688
+
689
+ if args.gradient_checkpointing:
690
+ unet.enable_gradient_checkpointing()
691
+
692
+ # Enable TF32 for faster training on Ampere GPUs,
693
+ # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
694
+ if args.allow_tf32:
695
+ torch.backends.cuda.matmul.allow_tf32 = True
696
+
697
+ if args.scale_lr:
698
+ args.learning_rate = (
699
+ args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
700
+ )
701
+
702
+ # Initialize the optimizer
703
+ if args.use_8bit_adam:
704
+ try:
705
+ import bitsandbytes as bnb
706
+ except ImportError:
707
+ raise ImportError(
708
+ "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`"
709
+ )
710
+
711
+ optimizer_cls = bnb.optim.AdamW8bit
712
+ else:
713
+ optimizer_cls = torch.optim.AdamW
714
+
715
+ optimizer = optimizer_cls(
716
+ unet.parameters(),
717
+ lr=args.learning_rate,
718
+ betas=(args.adam_beta1, args.adam_beta2),
719
+ weight_decay=args.adam_weight_decay,
720
+ eps=args.adam_epsilon,
721
+ )
722
+
723
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
724
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
725
+
726
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
727
+ # download the dataset.
728
+ if args.dataset_name is not None:
729
+ # Downloading and loading a dataset from the hub.
730
+ dataset = load_dataset(
731
+ args.dataset_name,
732
+ args.dataset_config_name,
733
+ cache_dir=args.cache_dir,
734
+ data_dir=args.train_data_dir,
735
+ )
736
+ else:
737
+ data_files = {}
738
+ if args.train_data_dir is not None:
739
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
740
+ dataset = load_dataset(
741
+ "imagefolder",
742
+ data_files=data_files,
743
+ cache_dir=args.cache_dir,
744
+ )
745
+ # See more about loading custom images at
746
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
747
+
748
+ # Preprocessing the datasets.
749
+ # We need to tokenize inputs and targets.
750
+ column_names = dataset["train"].column_names
751
+
752
+ # 6. Get the column names for input/target.
753
+ dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None)
754
+ if args.image_column is None:
755
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
756
+ else:
757
+ image_column = args.image_column
758
+ if image_column not in column_names:
759
+ raise ValueError(
760
+ f"`--image_column` value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
761
+ )
762
+ if args.caption_column is None:
763
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
764
+ else:
765
+ caption_column = args.caption_column
766
+ if caption_column not in column_names:
767
+ raise ValueError(
768
+ f"`--caption_column` value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
769
+ )
770
+
771
+ # Preprocessing the datasets.
772
+ # We need to tokenize input captions and transform the images.
773
+ def tokenize_captions(examples, is_train=True):
774
+ captions = []
775
+ for caption in examples[caption_column]:
776
+ if isinstance(caption, str):
777
+ captions.append(caption)
778
+ elif isinstance(caption, (list, np.ndarray)):
779
+ # take a random caption if there are multiple
780
+ captions.append(random.choice(caption) if is_train else caption[0])
781
+ else:
782
+ raise ValueError(
783
+ f"Caption column `{caption_column}` should contain either strings or lists of strings."
784
+ )
785
+ inputs = tokenizer(
786
+ captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
787
+ )
788
+ return inputs.input_ids
789
+
790
+ # Preprocessing the datasets.
791
+ train_transforms = transforms.Compose(
792
+ [
793
+ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
794
+ transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
795
+ transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
796
+ transforms.ToTensor(),
797
+ transforms.Normalize([0.5], [0.5]),
798
+ ]
799
+ )
800
+
801
+ def preprocess_train(examples):
802
+ images = [image.convert("RGB") for image in examples[image_column]]
803
+ examples["pixel_values"] = [train_transforms(image) for image in images]
804
+ examples["input_ids"] = tokenize_captions(examples)
805
+ return examples
806
+
807
+ with accelerator.main_process_first():
808
+ if args.max_train_samples is not None:
809
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
810
+ # Set the training transforms
811
+ train_dataset = dataset["train"].with_transform(preprocess_train)
812
+
813
+ def collate_fn(examples):
814
+ pixel_values = torch.stack([example["pixel_values"] for example in examples])
815
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
816
+ input_ids = torch.stack([example["input_ids"] for example in examples])
817
+ return {"pixel_values": pixel_values, "input_ids": input_ids}
818
+
819
+ # DataLoaders creation:
820
+ train_dataloader = torch.utils.data.DataLoader(
821
+ train_dataset,
822
+ shuffle=True,
823
+ collate_fn=collate_fn,
824
+ batch_size=args.train_batch_size,
825
+ num_workers=args.dataloader_num_workers,
826
+ )
827
+
828
+ # Scheduler and math around the number of training steps.
829
+ # Check the PR https://github.com/huggingface/diffusers/pull/8312 for detailed explanation.
830
+ num_warmup_steps_for_scheduler = args.lr_warmup_steps * accelerator.num_processes
831
+ if args.max_train_steps is None:
832
+ len_train_dataloader_after_sharding = math.ceil(len(train_dataloader) / accelerator.num_processes)
833
+ num_update_steps_per_epoch = math.ceil(len_train_dataloader_after_sharding / args.gradient_accumulation_steps)
834
+ num_training_steps_for_scheduler = (
835
+ args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes
836
+ )
837
+ else:
838
+ num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes
839
+
840
+ lr_scheduler = get_scheduler(
841
+ args.lr_scheduler,
842
+ optimizer=optimizer,
843
+ num_warmup_steps=num_warmup_steps_for_scheduler,
844
+ num_training_steps=num_training_steps_for_scheduler,
845
+ )
846
+
847
+ # Prepare everything with our `accelerator`.
848
+ unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
849
+ unet, optimizer, train_dataloader, lr_scheduler
850
+ )
851
+
852
+ if args.use_ema:
853
+ if args.offload_ema:
854
+ ema_unet.pin_memory()
855
+ else:
856
+ ema_unet.to(accelerator.device)
857
+
858
+ # For mixed precision training we cast all non-trainable weights (vae, non-lora text_encoder and non-lora unet) to half-precision
859
+ # as these weights are only used for inference, keeping weights in full precision is not required.
860
+ weight_dtype = torch.float32
861
+ if accelerator.mixed_precision == "fp16":
862
+ weight_dtype = torch.float16
863
+ args.mixed_precision = accelerator.mixed_precision
864
+ elif accelerator.mixed_precision == "bf16":
865
+ weight_dtype = torch.bfloat16
866
+ args.mixed_precision = accelerator.mixed_precision
867
+
868
+ # Move text_encoder and vae to GPU and cast to weight_dtype
869
+ text_encoder.to(accelerator.device, dtype=weight_dtype)
870
+ vae.to(accelerator.device, dtype=weight_dtype)
871
+
872
+ # We need to recalculate our total training steps as the size of the training dataloader may have changed.
873
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
874
+ if args.max_train_steps is None:
875
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
876
+ if num_training_steps_for_scheduler != args.max_train_steps * accelerator.num_processes:
877
+ logger.warning(
878
+ f"The length of the 'train_dataloader' after 'accelerator.prepare' ({len(train_dataloader)}) does not match "
879
+ f"the expected length ({len_train_dataloader_after_sharding}) when the learning rate scheduler was created. "
880
+ f"This inconsistency may result in the learning rate scheduler not functioning properly."
881
+ )
882
+ # Afterwards we recalculate our number of training epochs
883
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
884
+
885
+ # We need to initialize the trackers we use, and also store our configuration.
886
+ # The trackers initialize automatically on the main process.
887
+ if accelerator.is_main_process:
888
+ tracker_config = dict(vars(args))
889
+ tracker_config.pop("validation_prompts")
890
+ accelerator.init_trackers(args.tracker_project_name, tracker_config)
891
+
892
+ # Function for unwrapping if model was compiled with `torch.compile`.
893
+ def unwrap_model(model):
894
+ model = accelerator.unwrap_model(model)
895
+ model = model._orig_mod if is_compiled_module(model) else model
896
+ return model
897
+
898
+ # Train!
899
+ total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
900
+
901
+ logger.info("***** Running training *****")
902
+ logger.info(f" Num examples = {len(train_dataset)}")
903
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
904
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
905
+ logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
906
+ logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
907
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
908
+ global_step = 0
909
+ first_epoch = 0
910
+
911
+ # Potentially load in the weights and states from a previous save
912
+ if args.resume_from_checkpoint:
913
+ if args.resume_from_checkpoint != "latest":
914
+ path = os.path.basename(args.resume_from_checkpoint)
915
+ else:
916
+ # Get the most recent checkpoint
917
+ dirs = os.listdir(args.output_dir)
918
+ dirs = [d for d in dirs if d.startswith("checkpoint")]
919
+ dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
920
+ path = dirs[-1] if len(dirs) > 0 else None
921
+
922
+ if path is None:
923
+ accelerator.print(
924
+ f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
925
+ )
926
+ args.resume_from_checkpoint = None
927
+ initial_global_step = 0
928
+ else:
929
+ accelerator.print(f"Resuming from checkpoint {path}")
930
+ accelerator.load_state(os.path.join(args.output_dir, path))
931
+ global_step = int(path.split("-")[1])
932
+
933
+ initial_global_step = global_step
934
+ first_epoch = global_step // num_update_steps_per_epoch
935
+
936
+ else:
937
+ initial_global_step = 0
938
+
939
+ progress_bar = tqdm(
940
+ range(0, args.max_train_steps),
941
+ initial=initial_global_step,
942
+ desc="Steps",
943
+ # Only show the progress bar once on each machine.
944
+ disable=not accelerator.is_local_main_process,
945
+ )
946
+
947
+ for epoch in range(first_epoch, args.num_train_epochs):
948
+ train_loss = 0.0
949
+ for step, batch in enumerate(train_dataloader):
950
+ with accelerator.accumulate(unet):
951
+ # Convert images to latent space
952
+ latents = vae.encode(batch["pixel_values"].to(weight_dtype)).latent_dist.sample()
953
+ latents = latents * vae.config.scaling_factor
954
+
955
+ # Sample noise that we'll add to the latents
956
+ noise = torch.randn_like(latents)
957
+ if args.noise_offset:
958
+ # https://www.crosslabs.org//blog/diffusion-with-offset-noise
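+ # Offset noise adds a per-sample, per-channel constant to the sampled noise; the linked post
+ # reports that this helps the model generate very dark and very bright images.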
959
+ noise += args.noise_offset * torch.randn(
960
+ (latents.shape[0], latents.shape[1], 1, 1), device=latents.device
961
+ )
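+ # Input perturbation builds the noisy latents from a slightly perturbed copy of the noise
+ # while keeping the unperturbed noise as the prediction target.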
962
+ if args.input_perturbation:
963
+ new_noise = noise + args.input_perturbation * torch.randn_like(noise)
964
+ bsz = latents.shape[0]
965
+ # Sample a random timestep for each image
966
+ timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
967
+ timesteps = timesteps.long()
968
+
969
+ # Add noise to the latents according to the noise magnitude at each timestep
970
+ # (this is the forward diffusion process)
971
+ if args.input_perturbation:
972
+ noisy_latents = noise_scheduler.add_noise(latents, new_noise, timesteps)
973
+ else:
974
+ noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
975
+
976
+ # Get the text embedding for conditioning
977
+ encoder_hidden_states = text_encoder(batch["input_ids"], return_dict=False)[0]
978
+
979
+ # Get the target for loss depending on the prediction type
980
+ if args.prediction_type is not None:
981
+ # set prediction_type of scheduler if defined
982
+ noise_scheduler.register_to_config(prediction_type=args.prediction_type)
983
+
984
+ if noise_scheduler.config.prediction_type == "epsilon":
985
+ target = noise
986
+ elif noise_scheduler.config.prediction_type == "v_prediction":
987
+ target = noise_scheduler.get_velocity(latents, noise, timesteps)
988
+ else:
989
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
990
+
991
+ if args.dream_training:
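+ # DREAM (see the --dream_training help above) adjusts the noisy latents and the target with
+ # an extra UNet prediction before the loss is computed.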
992
+ noisy_latents, target = compute_dream_and_update_latents(
993
+ unet,
994
+ noise_scheduler,
995
+ timesteps,
996
+ noise,
997
+ noisy_latents,
998
+ target,
999
+ encoder_hidden_states,
1000
+ args.dream_detail_preservation,
1001
+ )
1002
+
1003
+ # Predict the noise residual and compute loss
1004
+ model_pred = unet(noisy_latents, timesteps, encoder_hidden_states, return_dict=False)[0]
1005
+
1006
+ if args.snr_gamma is None:
1007
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
1008
+ else:
1009
+ # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556.
1010
+ # Since we predict the noise instead of x_0, the original formulation is slightly changed.
1011
+ # This is discussed in Section 4.2 of the same paper.
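+ # The resulting per-sample weight is min(SNR, snr_gamma) / SNR for epsilon prediction,
+ # and min(SNR, snr_gamma) / (SNR + 1) for v-prediction.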
1012
+ snr = compute_snr(noise_scheduler, timesteps)
1013
+ mse_loss_weights = torch.stack([snr, args.snr_gamma * torch.ones_like(timesteps)], dim=1).min(
1014
+ dim=1
1015
+ )[0]
1016
+ if noise_scheduler.config.prediction_type == "epsilon":
1017
+ mse_loss_weights = mse_loss_weights / snr
1018
+ elif noise_scheduler.config.prediction_type == "v_prediction":
1019
+ mse_loss_weights = mse_loss_weights / (snr + 1)
1020
+
1021
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
1022
+ loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
1023
+ loss = loss.mean()
1024
+
1025
+ # Gather the losses across all processes for logging (if we use distributed training).
1026
+ avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
1027
+ train_loss += avg_loss.item() / args.gradient_accumulation_steps
1028
+
1029
+ # Backpropagate
1030
+ accelerator.backward(loss)
1031
+ if accelerator.sync_gradients:
1032
+ accelerator.clip_grad_norm_(unet.parameters(), args.max_grad_norm)
1033
+ optimizer.step()
1034
+ lr_scheduler.step()
1035
+ optimizer.zero_grad()
1036
+
1037
+ # Checks if the accelerator has performed an optimization step behind the scenes
1038
+ if accelerator.sync_gradients:
1039
+ if args.use_ema:
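+ # With --offload_ema, the EMA weights live in pinned CPU memory and are moved to the GPU
+ # only for the EMA update, then moved back.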
1040
+ if args.offload_ema:
1041
+ ema_unet.to(device="cuda", non_blocking=True)
1042
+ ema_unet.step(unet.parameters())
1043
+ if args.offload_ema:
1044
+ ema_unet.to(device="cpu", non_blocking=True)
1045
+ progress_bar.update(1)
1046
+ global_step += 1
1047
+ accelerator.log({"train_loss": train_loss}, step=global_step)
1048
+ train_loss = 0.0
1049
+
1050
+ if global_step % args.checkpointing_steps == 0:
1051
+ if accelerator.is_main_process:
1052
+ # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
1053
+ if args.checkpoints_total_limit is not None:
1054
+ checkpoints = os.listdir(args.output_dir)
1055
+ checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
1056
+ checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
1057
+
1058
+ # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
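+ # e.g. with `checkpoints_total_limit=2` and `checkpoint-2`, `checkpoint-4` on disk,
+ # `checkpoint-2` is removed before `checkpoint-6` is saved (as exercised in the tests above)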
1059
+ if len(checkpoints) >= args.checkpoints_total_limit:
1060
+ num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
1061
+ removing_checkpoints = checkpoints[0:num_to_remove]
1062
+
1063
+ logger.info(
1064
+ f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
1065
+ )
1066
+ logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
1067
+
1068
+ for removing_checkpoint in removing_checkpoints:
1069
+ removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
1070
+ shutil.rmtree(removing_checkpoint)
1071
+
1072
+ save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
1073
+ accelerator.save_state(save_path)
1074
+ logger.info(f"Saved state to {save_path}")
1075
+
1076
+ logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
1077
+ progress_bar.set_postfix(**logs)
1078
+
1079
+ if global_step >= args.max_train_steps:
1080
+ break
1081
+
1082
+ if accelerator.is_main_process:
1083
+ if args.validation_prompts is not None and epoch % args.validation_epochs == 0:
1084
+ if args.use_ema:
1085
+ # Store the UNet parameters temporarily and load the EMA parameters to perform inference.
1086
+ ema_unet.store(unet.parameters())
1087
+ ema_unet.copy_to(unet.parameters())
1088
+ log_validation(
1089
+ vae,
1090
+ text_encoder,
1091
+ tokenizer,
1092
+ unet,
1093
+ args,
1094
+ accelerator,
1095
+ weight_dtype,
1096
+ global_step,
1097
+ )
1098
+ if args.use_ema:
1099
+ # Switch back to the original UNet parameters.
1100
+ ema_unet.restore(unet.parameters())
1101
+
1102
+ # Create the pipeline using the trained modules and save it.
1103
+ accelerator.wait_for_everyone()
1104
+ if accelerator.is_main_process:
1105
+ unet = unwrap_model(unet)
1106
+ if args.use_ema:
1107
+ ema_unet.copy_to(unet.parameters())
1108
+
1109
+ pipeline = StableDiffusionPipeline.from_pretrained(
1110
+ args.pretrained_model_name_or_path,
1111
+ text_encoder=text_encoder,
1112
+ vae=vae,
1113
+ unet=unet,
1114
+ revision=args.revision,
1115
+ variant=args.variant,
1116
+ )
1117
+ pipeline.save_pretrained(args.output_dir)
1118
+
1119
+ # Run a final round of inference.
1120
+ images = []
1121
+ if args.validation_prompts is not None:
1122
+ logger.info("Running inference for collecting generated images...")
1123
+ pipeline = pipeline.to(accelerator.device)
1124
+ pipeline.torch_dtype = weight_dtype
1125
+ pipeline.set_progress_bar_config(disable=True)
1126
+
1127
+ if args.enable_xformers_memory_efficient_attention:
1128
+ pipeline.enable_xformers_memory_efficient_attention()
1129
+
1130
+ if args.seed is None:
1131
+ generator = None
1132
+ else:
1133
+ generator = torch.Generator(device=accelerator.device).manual_seed(args.seed)
1134
+
1135
+ for i in range(len(args.validation_prompts)):
1136
+ with torch.autocast("cuda"):
1137
+ image = pipeline(args.validation_prompts[i], num_inference_steps=20, generator=generator).images[0]
1138
+ images.append(image)
1139
+
1140
+ if args.push_to_hub:
1141
+ save_model_card(args, repo_id, images, repo_folder=args.output_dir)
1142
+ upload_folder(
1143
+ repo_id=repo_id,
1144
+ folder_path=args.output_dir,
1145
+ commit_message="End of training",
1146
+ ignore_patterns=["step_*", "epoch_*"],
1147
+ )
1148
+
1149
+ accelerator.end_training()
1150
+
1151
+
1152
+ if __name__ == "__main__":
1153
+ main()
train_text_to_image_flax.py ADDED
@@ -0,0 +1,620 @@
1
+ #!/usr/bin/env python
2
+ # coding=utf-8
3
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+
17
+ import argparse
18
+ import logging
19
+ import math
20
+ import os
21
+ import random
22
+ from pathlib import Path
23
+
24
+ import jax
25
+ import jax.numpy as jnp
26
+ import numpy as np
27
+ import optax
28
+ import torch
29
+ import torch.utils.checkpoint
30
+ import transformers
31
+ from datasets import load_dataset
32
+ from flax import jax_utils
33
+ from flax.training import train_state
34
+ from flax.training.common_utils import shard
35
+ from huggingface_hub import create_repo, upload_folder
36
+ from torchvision import transforms
37
+ from tqdm.auto import tqdm
38
+ from transformers import CLIPImageProcessor, CLIPTokenizer, FlaxCLIPTextModel, set_seed
39
+
40
+ from diffusers import (
41
+ FlaxAutoencoderKL,
42
+ FlaxDDPMScheduler,
43
+ FlaxPNDMScheduler,
44
+ FlaxStableDiffusionPipeline,
45
+ FlaxUNet2DConditionModel,
46
+ )
47
+ from diffusers.pipelines.stable_diffusion import FlaxStableDiffusionSafetyChecker
48
+ from diffusers.utils import check_min_version
49
+
50
+
51
+ # Will error if the minimal version of diffusers is not installed. Remove at your own risk.
52
+ check_min_version("0.33.0.dev0")
53
+
54
+ logger = logging.getLogger(__name__)
55
+
56
+
57
+ def parse_args():
58
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
59
+ parser.add_argument(
60
+ "--pretrained_model_name_or_path",
61
+ type=str,
62
+ default=None,
63
+ required=True,
64
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
65
+ )
66
+ parser.add_argument(
67
+ "--revision",
68
+ type=str,
69
+ default=None,
70
+ required=False,
71
+ help="Revision of pretrained model identifier from huggingface.co/models.",
72
+ )
73
+ parser.add_argument(
74
+ "--variant",
75
+ type=str,
76
+ default=None,
77
+ help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16",
78
+ )
79
+ parser.add_argument(
80
+ "--dataset_name",
81
+ type=str,
82
+ default=None,
83
+ help=(
84
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
85
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
86
+ " or to a folder containing files that 🤗 Datasets can understand."
87
+ ),
88
+ )
89
+ parser.add_argument(
90
+ "--dataset_config_name",
91
+ type=str,
92
+ default=None,
93
+ help="The config of the Dataset, leave as None if there's only one config.",
94
+ )
95
+ parser.add_argument(
96
+ "--train_data_dir",
97
+ type=str,
98
+ default=None,
99
+ help=(
100
+ "A folder containing the training data. Folder contents must follow the structure described in"
101
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
102
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
103
+ ),
104
+ )
105
+ parser.add_argument(
106
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
107
+ )
108
+ parser.add_argument(
109
+ "--caption_column",
110
+ type=str,
111
+ default="text",
112
+ help="The column of the dataset containing a caption or a list of captions.",
113
+ )
114
+ parser.add_argument(
115
+ "--max_train_samples",
116
+ type=int,
117
+ default=None,
118
+ help=(
119
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
120
+ "value if set."
121
+ ),
122
+ )
123
+ parser.add_argument(
124
+ "--output_dir",
125
+ type=str,
126
+ default="sd-model-finetuned",
127
+ help="The output directory where the model predictions and checkpoints will be written.",
128
+ )
129
+ parser.add_argument(
130
+ "--cache_dir",
131
+ type=str,
132
+ default=None,
133
+ help="The directory where the downloaded models and datasets will be stored.",
134
+ )
135
+ parser.add_argument("--seed", type=int, default=0, help="A seed for reproducible training.")
136
+ parser.add_argument(
137
+ "--resolution",
138
+ type=int,
139
+ default=512,
140
+ help=(
141
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
142
+ " resolution"
143
+ ),
144
+ )
145
+ parser.add_argument(
146
+ "--center_crop",
147
+ default=False,
148
+ action="store_true",
149
+ help=(
150
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
151
+ " cropped. The images will be resized to the resolution first before cropping."
152
+ ),
153
+ )
154
+ parser.add_argument(
155
+ "--random_flip",
156
+ action="store_true",
157
+ help="whether to randomly flip images horizontally",
158
+ )
159
+ parser.add_argument(
160
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
161
+ )
162
+ parser.add_argument("--num_train_epochs", type=int, default=100)
163
+ parser.add_argument(
164
+ "--max_train_steps",
165
+ type=int,
166
+ default=None,
167
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
168
+ )
169
+ parser.add_argument(
170
+ "--learning_rate",
171
+ type=float,
172
+ default=1e-4,
173
+ help="Initial learning rate (after the potential warmup period) to use.",
174
+ )
175
+ parser.add_argument(
176
+ "--scale_lr",
177
+ action="store_true",
178
+ default=False,
179
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
180
+ )
181
+ parser.add_argument(
182
+ "--lr_scheduler",
183
+ type=str,
184
+ default="constant",
185
+ help=(
186
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
187
+ ' "constant", "constant_with_warmup"]'
188
+ ),
189
+ )
190
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
191
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
192
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
193
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
194
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
195
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
196
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
197
+ parser.add_argument(
198
+ "--hub_model_id",
199
+ type=str,
200
+ default=None,
201
+ help="The name of the repository to keep in sync with the local `output_dir`.",
202
+ )
203
+ parser.add_argument(
204
+ "--logging_dir",
205
+ type=str,
206
+ default="logs",
207
+ help=(
208
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
209
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
210
+ ),
211
+ )
212
+ parser.add_argument(
213
+ "--report_to",
214
+ type=str,
215
+ default="tensorboard",
216
+ help=(
217
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
218
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
219
+ ),
220
+ )
221
+ parser.add_argument(
222
+ "--mixed_precision",
223
+ type=str,
224
+ default="no",
225
+ choices=["no", "fp16", "bf16"],
226
+ help=(
227
+ "Whether to use mixed precision. Choose"
228
+ " between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10"
229
+ " and an Nvidia Ampere GPU."
230
+ ),
231
+ )
232
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
233
+ parser.add_argument(
234
+ "--from_pt",
235
+ action="store_true",
236
+ default=False,
237
+ help="Flag to indicate whether to convert models from PyTorch.",
238
+ )
239
+
240
+ args = parser.parse_args()
241
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
242
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
243
+ args.local_rank = env_local_rank
244
+
245
+ # Sanity checks
246
+ if args.dataset_name is None and args.train_data_dir is None:
247
+ raise ValueError("Need either a dataset name or a training folder.")
248
+
249
+ return args
250
+
251
+
252
+ dataset_name_mapping = {
253
+ "lambdalabs/naruto-blip-captions": ("image", "text"),
254
+ }
255
+
256
+
257
+ def get_params_to_save(params):
258
+ return jax.device_get(jax.tree_util.tree_map(lambda x: x[0], params))
259
+
260
+
261
+ def main():
262
+ args = parse_args()
263
+
264
+ if args.report_to == "wandb" and args.hub_token is not None:
265
+ raise ValueError(
266
+ "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
267
+ " Please use `huggingface-cli login` to authenticate with the Hub."
268
+ )
269
+
270
+ logging.basicConfig(
271
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
272
+ datefmt="%m/%d/%Y %H:%M:%S",
273
+ level=logging.INFO,
274
+ )
275
+ # Set up logging; we only want one process per machine to log things on the screen.
276
+ logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR)
277
+ if jax.process_index() == 0:
278
+ transformers.utils.logging.set_verbosity_info()
279
+ else:
280
+ transformers.utils.logging.set_verbosity_error()
281
+
282
+ if args.seed is not None:
283
+ set_seed(args.seed)
284
+
285
+ # Handle the repository creation
286
+ if jax.process_index() == 0:
287
+ if args.output_dir is not None:
288
+ os.makedirs(args.output_dir, exist_ok=True)
289
+
290
+ if args.push_to_hub:
291
+ repo_id = create_repo(
292
+ repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
293
+ ).repo_id
294
+
295
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
296
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
297
+
298
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
299
+ # download the dataset.
300
+ if args.dataset_name is not None:
301
+ # Downloading and loading a dataset from the hub.
302
+ dataset = load_dataset(
303
+ args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, data_dir=args.train_data_dir
304
+ )
305
+ else:
306
+ data_files = {}
307
+ if args.train_data_dir is not None:
308
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
309
+ dataset = load_dataset(
310
+ "imagefolder",
311
+ data_files=data_files,
312
+ cache_dir=args.cache_dir,
313
+ )
314
+ # See more about loading custom images at
315
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
316
+
317
+ # Preprocessing the datasets.
318
+ # We need to tokenize inputs and targets.
319
+ column_names = dataset["train"].column_names
320
+
321
+ # 6. Get the column names for input/target.
322
+ dataset_columns = dataset_name_mapping.get(args.dataset_name, None)
323
+ if args.image_column is None:
324
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
325
+ else:
326
+ image_column = args.image_column
327
+ if image_column not in column_names:
328
+ raise ValueError(
329
+ f"'--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
330
+ )
331
+ if args.caption_column is None:
332
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
333
+ else:
334
+ caption_column = args.caption_column
335
+ if caption_column not in column_names:
336
+ raise ValueError(
337
+ f"'--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
338
+ )
339
+
340
+ # Preprocessing the datasets.
341
+ # We need to tokenize input captions and transform the images.
342
+ def tokenize_captions(examples, is_train=True):
343
+ captions = []
344
+ for caption in examples[caption_column]:
345
+ if isinstance(caption, str):
346
+ captions.append(caption)
347
+ elif isinstance(caption, (list, np.ndarray)):
348
+ # take a random caption if there are multiple
349
+ captions.append(random.choice(caption) if is_train else caption[0])
350
+ else:
351
+ raise ValueError(
352
+ f"Caption column `{caption_column}` should contain either strings or lists of strings."
353
+ )
354
+ inputs = tokenizer(captions, max_length=tokenizer.model_max_length, padding="do_not_pad", truncation=True)
355
+ input_ids = inputs.input_ids
356
+ return input_ids
357
+
358
+ train_transforms = transforms.Compose(
359
+ [
360
+ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
361
+ transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
362
+ transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
363
+ transforms.ToTensor(),
364
+ transforms.Normalize([0.5], [0.5]),
365
+ ]
366
+ )
367
+
368
+ def preprocess_train(examples):
369
+ images = [image.convert("RGB") for image in examples[image_column]]
370
+ examples["pixel_values"] = [train_transforms(image) for image in images]
371
+ examples["input_ids"] = tokenize_captions(examples)
372
+
373
+ return examples
374
+
375
+ if args.max_train_samples is not None:
376
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
377
+ # Set the training transforms
378
+ train_dataset = dataset["train"].with_transform(preprocess_train)
379
+
380
+ def collate_fn(examples):
381
+ pixel_values = torch.stack([example["pixel_values"] for example in examples])
382
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
383
+ input_ids = [example["input_ids"] for example in examples]
384
+
385
+ padded_tokens = tokenizer.pad(
386
+ {"input_ids": input_ids}, padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt"
387
+ )
388
+ batch = {
389
+ "pixel_values": pixel_values,
390
+ "input_ids": padded_tokens.input_ids,
391
+ }
392
+ batch = {k: v.numpy() for k, v in batch.items()}
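+ # Note: the batch is converted to NumPy because the JAX training step cannot consume torch tensors;
+ # shard() will later split these arrays across the local devices.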
393
+
394
+ return batch
395
+
396
+ total_train_batch_size = args.train_batch_size * jax.local_device_count()
397
+ train_dataloader = torch.utils.data.DataLoader(
398
+ train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=total_train_batch_size, drop_last=True
399
+ )
400
+
401
+ weight_dtype = jnp.float32
402
+ if args.mixed_precision == "fp16":
403
+ weight_dtype = jnp.float16
404
+ elif args.mixed_precision == "bf16":
405
+ weight_dtype = jnp.bfloat16
406
+
407
+ # Load models and create wrapper for stable diffusion
408
+ tokenizer = CLIPTokenizer.from_pretrained(
409
+ args.pretrained_model_name_or_path,
410
+ from_pt=args.from_pt,
411
+ revision=args.revision,
412
+ subfolder="tokenizer",
413
+ )
414
+ text_encoder = FlaxCLIPTextModel.from_pretrained(
415
+ args.pretrained_model_name_or_path,
416
+ from_pt=args.from_pt,
417
+ revision=args.revision,
418
+ subfolder="text_encoder",
419
+ dtype=weight_dtype,
420
+ )
421
+ vae, vae_params = FlaxAutoencoderKL.from_pretrained(
422
+ args.pretrained_model_name_or_path,
423
+ from_pt=args.from_pt,
424
+ revision=args.revision,
425
+ subfolder="vae",
426
+ dtype=weight_dtype,
427
+ )
428
+ unet, unet_params = FlaxUNet2DConditionModel.from_pretrained(
429
+ args.pretrained_model_name_or_path,
430
+ from_pt=args.from_pt,
431
+ revision=args.revision,
432
+ subfolder="unet",
433
+ dtype=weight_dtype,
434
+ )
435
+
436
+ # Optimization
437
+ if args.scale_lr:
438
+ args.learning_rate = args.learning_rate * total_train_batch_size
439
+
440
+ constant_scheduler = optax.constant_schedule(args.learning_rate)
441
+
442
+ adamw = optax.adamw(
443
+ learning_rate=constant_scheduler,
444
+ b1=args.adam_beta1,
445
+ b2=args.adam_beta2,
446
+ eps=args.adam_epsilon,
447
+ weight_decay=args.adam_weight_decay,
448
+ )
449
+
450
+ optimizer = optax.chain(
451
+ optax.clip_by_global_norm(args.max_grad_norm),
452
+ adamw,
453
+ )
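+ # Note: optax.chain applies its transformations in order, so gradients are first clipped to a global
+ # norm of `max_grad_norm` and only then passed to the AdamW update.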
454
+
455
+ state = train_state.TrainState.create(apply_fn=unet.__call__, params=unet_params, tx=optimizer)
456
+
457
+ noise_scheduler = FlaxDDPMScheduler(
458
+ beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000
459
+ )
460
+ noise_scheduler_state = noise_scheduler.create_state()
461
+
462
+ # Initialize our training
463
+ rng = jax.random.PRNGKey(args.seed)
464
+ train_rngs = jax.random.split(rng, jax.local_device_count())
465
+
466
+ def train_step(state, text_encoder_params, vae_params, batch, train_rng):
467
+ dropout_rng, sample_rng, new_train_rng = jax.random.split(train_rng, 3)
468
+
469
+ def compute_loss(params):
470
+ # Convert images to latent space
471
+ vae_outputs = vae.apply(
472
+ {"params": vae_params}, batch["pixel_values"], deterministic=True, method=vae.encode
473
+ )
474
+ latents = vae_outputs.latent_dist.sample(sample_rng)
475
+ # (NHWC) -> (NCHW)
476
+ latents = jnp.transpose(latents, (0, 3, 1, 2))
477
+ latents = latents * vae.config.scaling_factor
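+ # Note: the Flax VAE returns latents in NHWC layout, hence the transpose above; multiplying by
+ # `vae.config.scaling_factor` (0.18215 for the Stable Diffusion v1 VAE) rescales the latents to
+ # roughly unit variance before diffusion.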
478
+
479
+ # Sample noise that we'll add to the latents
480
+ noise_rng, timestep_rng = jax.random.split(sample_rng)
481
+ noise = jax.random.normal(noise_rng, latents.shape)
482
+ # Sample a random timestep for each image
483
+ bsz = latents.shape[0]
484
+ timesteps = jax.random.randint(
485
+ timestep_rng,
486
+ (bsz,),
487
+ 0,
488
+ noise_scheduler.config.num_train_timesteps,
489
+ )
490
+
491
+ # Add noise to the latents according to the noise magnitude at each timestep
492
+ # (this is the forward diffusion process)
493
+ noisy_latents = noise_scheduler.add_noise(noise_scheduler_state, latents, noise, timesteps)
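+ # Note: add_noise implements the closed-form forward process
+ # noisy_latents = sqrt(alpha_bar_t) * latents + sqrt(1 - alpha_bar_t) * noise.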
494
+
495
+ # Get the text embedding for conditioning
496
+ encoder_hidden_states = text_encoder(
497
+ batch["input_ids"],
498
+ params=text_encoder_params,
499
+ train=False,
500
+ )[0]
501
+
502
+ # Predict the noise residual and compute loss
503
+ model_pred = unet.apply(
504
+ {"params": params}, noisy_latents, timesteps, encoder_hidden_states, train=True
505
+ ).sample
506
+
507
+ # Get the target for loss depending on the prediction type
508
+ if noise_scheduler.config.prediction_type == "epsilon":
509
+ target = noise
510
+ elif noise_scheduler.config.prediction_type == "v_prediction":
511
+ target = noise_scheduler.get_velocity(noise_scheduler_state, latents, noise, timesteps)
512
+ else:
513
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
514
+
515
+ loss = (target - model_pred) ** 2
516
+ loss = loss.mean()
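+ # Note: for epsilon-prediction the target is the injected noise itself, while for v-prediction it is
+ # v_t = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * latents; the loss is a plain MSE averaged
+ # over the batch, channel and spatial dimensions.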
517
+
518
+ return loss
519
+
520
+ grad_fn = jax.value_and_grad(compute_loss)
521
+ loss, grad = grad_fn(state.params)
522
+ grad = jax.lax.pmean(grad, "batch")
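+ # Note: jax.lax.pmean averages the gradients across devices along the pmap axis named "batch",
+ # i.e. a data-parallel all-reduce.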
523
+
524
+ new_state = state.apply_gradients(grads=grad)
525
+
526
+ metrics = {"loss": loss}
527
+ metrics = jax.lax.pmean(metrics, axis_name="batch")
528
+
529
+ return new_state, metrics, new_train_rng
530
+
531
+ # Create parallel version of the train step
532
+ p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,))
533
+
534
+ # Replicate the train state on each device
535
+ state = jax_utils.replicate(state)
536
+ text_encoder_params = jax_utils.replicate(text_encoder.params)
537
+ vae_params = jax_utils.replicate(vae_params)
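+ # Note: jax.pmap compiles `train_step` once and runs it on every local device in parallel
+ # (donate_argnums=(0,) lets XLA reuse the old train state's buffers), while replicate() places a
+ # copy of the parameters on each device.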
538
+
539
+ # Train!
540
+ num_update_steps_per_epoch = len(train_dataloader)
541
+
542
+ # Scheduler and math around the number of training steps.
543
+ if args.max_train_steps is None:
544
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
545
+
546
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
547
+
548
+ logger.info("***** Running training *****")
549
+ logger.info(f" Num examples = {len(train_dataset)}")
550
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
551
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
552
+ logger.info(f" Total train batch size (w. parallel & distributed) = {total_train_batch_size}")
553
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
554
+
555
+ global_step = 0
556
+
557
+ epochs = tqdm(range(args.num_train_epochs), desc="Epoch ... ", position=0)
558
+ for epoch in epochs:
559
+ # ======================== Training ================================
560
+
561
+ train_metrics = []
562
+
563
+ steps_per_epoch = len(train_dataset) // total_train_batch_size
564
+ train_step_progress_bar = tqdm(total=steps_per_epoch, desc="Training...", position=1, leave=False)
565
+ # train
566
+ for batch in train_dataloader:
567
+ batch = shard(batch)
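+ # Note: shard() reshapes every array from (total_batch, ...) to (num_local_devices, per_device_batch, ...)
+ # so that pmap can dispatch one slice per device.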
568
+ state, train_metric, train_rngs = p_train_step(state, text_encoder_params, vae_params, batch, train_rngs)
569
+ train_metrics.append(train_metric)
570
+
571
+ train_step_progress_bar.update(1)
572
+
573
+ global_step += 1
574
+ if global_step >= args.max_train_steps:
575
+ break
576
+
577
+ train_metric = jax_utils.unreplicate(train_metric)
578
+
579
+ train_step_progress_bar.close()
580
+ epochs.write(f"Epoch... ({epoch + 1}/{args.num_train_epochs} | Loss: {train_metric['loss']})")
581
+
582
+ # Create the pipeline using the trained modules and save it.
583
+ if jax.process_index() == 0:
584
+ scheduler = FlaxPNDMScheduler(
585
+ beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True
586
+ )
587
+ safety_checker = FlaxStableDiffusionSafetyChecker.from_pretrained(
588
+ "CompVis/stable-diffusion-safety-checker", from_pt=True
589
+ )
590
+ pipeline = FlaxStableDiffusionPipeline(
591
+ text_encoder=text_encoder,
592
+ vae=vae,
593
+ unet=unet,
594
+ tokenizer=tokenizer,
595
+ scheduler=scheduler,
596
+ safety_checker=safety_checker,
597
+ feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"),
598
+ )
599
+
600
+ pipeline.save_pretrained(
601
+ args.output_dir,
602
+ params={
603
+ "text_encoder": get_params_to_save(text_encoder_params),
604
+ "vae": get_params_to_save(vae_params),
605
+ "unet": get_params_to_save(state.params),
606
+ "safety_checker": safety_checker.params,
607
+ },
608
+ )
609
+
610
+ if args.push_to_hub:
611
+ upload_folder(
612
+ repo_id=repo_id,
613
+ folder_path=args.output_dir,
614
+ commit_message="End of training",
615
+ ignore_patterns=["step_*", "epoch_*"],
616
+ )
617
+
618
+
619
+ if __name__ == "__main__":
620
+ main()
train_text_to_image_lora.py ADDED
@@ -0,0 +1,975 @@
1
+ #!/usr/bin/env python
2
+ # coding=utf-8
3
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """Fine-tuning script for Stable Diffusion for text2image with support for LoRA."""
17
+
18
+ import argparse
19
+ import logging
20
+ import math
21
+ import os
22
+ import random
23
+ import shutil
24
+ from contextlib import nullcontext
25
+ from pathlib import Path
26
+
27
+ import datasets
28
+ import numpy as np
29
+ import torch
30
+ import torch.nn.functional as F
31
+ import torch.utils.checkpoint
32
+ import transformers
33
+ from accelerate import Accelerator
34
+ from accelerate.logging import get_logger
35
+ from accelerate.utils import ProjectConfiguration, set_seed
36
+ from datasets import load_dataset
37
+ from huggingface_hub import create_repo, upload_folder
38
+ from packaging import version
39
+ from peft import LoraConfig
40
+ from peft.utils import get_peft_model_state_dict
41
+ from torchvision import transforms
42
+ from tqdm.auto import tqdm
43
+ from transformers import CLIPTextModel, CLIPTokenizer
44
+
45
+ import diffusers
46
+ from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, StableDiffusionPipeline, UNet2DConditionModel
47
+ from diffusers.optimization import get_scheduler
48
+ from diffusers.training_utils import cast_training_params, compute_snr
49
+ from diffusers.utils import check_min_version, convert_state_dict_to_diffusers, is_wandb_available
50
+ from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card
51
+ from diffusers.utils.import_utils import is_xformers_available
52
+ from diffusers.utils.torch_utils import is_compiled_module
53
+
54
+
55
+ if is_wandb_available():
56
+ import wandb
57
+
58
+ # Will error if the minimum version of diffusers is not installed. Remove at your own risk.
59
+ check_min_version("0.33.0.dev0")
60
+
61
+ logger = get_logger(__name__, log_level="INFO")
62
+
63
+
64
+ def save_model_card(
65
+ repo_id: str,
66
+ images: list = None,
67
+ base_model: str = None,
68
+ dataset_name: str = None,
69
+ repo_folder: str = None,
70
+ ):
71
+ img_str = ""
72
+ if images is not None:
73
+ for i, image in enumerate(images):
74
+ image.save(os.path.join(repo_folder, f"image_{i}.png"))
75
+ img_str += f"![img_{i}](./image_{i}.png)\n"
76
+
77
+ model_description = f"""
78
+ # LoRA text2image fine-tuning - {repo_id}
79
+ These are LoRA adaptation weights for {base_model}. The weights were fine-tuned on the {dataset_name} dataset. You can find some example images below. \n
80
+ {img_str}
81
+ """
82
+
83
+ model_card = load_or_create_model_card(
84
+ repo_id_or_path=repo_id,
85
+ from_training=True,
86
+ license="creativeml-openrail-m",
87
+ base_model=base_model,
88
+ model_description=model_description,
89
+ inference=True,
90
+ )
91
+
92
+ tags = [
93
+ "stable-diffusion",
94
+ "stable-diffusion-diffusers",
95
+ "text-to-image",
96
+ "diffusers",
97
+ "diffusers-training",
98
+ "lora",
99
+ ]
100
+ model_card = populate_model_card(model_card, tags=tags)
101
+
102
+ model_card.save(os.path.join(repo_folder, "README.md"))
103
+
104
+
105
+ def log_validation(
106
+ pipeline,
107
+ args,
108
+ accelerator,
109
+ epoch,
110
+ is_final_validation=False,
111
+ ):
112
+ logger.info(
113
+ f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
114
+ f" {args.validation_prompt}."
115
+ )
116
+ pipeline = pipeline.to(accelerator.device)
117
+ pipeline.set_progress_bar_config(disable=True)
118
+ generator = torch.Generator(device=accelerator.device)
119
+ if args.seed is not None:
120
+ generator = generator.manual_seed(args.seed)
121
+ images = []
122
+ if torch.backends.mps.is_available():
123
+ autocast_ctx = nullcontext()
124
+ else:
125
+ autocast_ctx = torch.autocast(accelerator.device.type)
126
+
127
+ with autocast_ctx:
128
+ for _ in range(args.num_validation_images):
129
+ images.append(pipeline(args.validation_prompt, num_inference_steps=30, generator=generator).images[0])
130
+
131
+ for tracker in accelerator.trackers:
132
+ phase_name = "test" if is_final_validation else "validation"
133
+ if tracker.name == "tensorboard":
134
+ np_images = np.stack([np.asarray(img) for img in images])
135
+ tracker.writer.add_images(phase_name, np_images, epoch, dataformats="NHWC")
136
+ if tracker.name == "wandb":
137
+ tracker.log(
138
+ {
139
+ phase_name: [
140
+ wandb.Image(image, caption=f"{i}: {args.validation_prompt}") for i, image in enumerate(images)
141
+ ]
142
+ }
143
+ )
144
+ return images
145
+
146
+
147
+ def parse_args():
148
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
149
+ parser.add_argument(
150
+ "--pretrained_model_name_or_path",
151
+ type=str,
152
+ default=None,
153
+ required=True,
154
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
155
+ )
156
+ parser.add_argument(
157
+ "--revision",
158
+ type=str,
159
+ default=None,
160
+ required=False,
161
+ help="Revision of pretrained model identifier from huggingface.co/models.",
162
+ )
163
+ parser.add_argument(
164
+ "--variant",
165
+ type=str,
166
+ default=None,
167
+ help="Variant of the model files of the pretrained model identifier from huggingface.co/models, e.g. fp16",
168
+ )
169
+ parser.add_argument(
170
+ "--dataset_name",
171
+ type=str,
172
+ default=None,
173
+ help=(
174
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
175
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
176
+ " or to a folder containing files that 🤗 Datasets can understand."
177
+ ),
178
+ )
179
+ parser.add_argument(
180
+ "--dataset_config_name",
181
+ type=str,
182
+ default=None,
183
+ help="The config of the Dataset, leave as None if there's only one config.",
184
+ )
185
+ parser.add_argument(
186
+ "--train_data_dir",
187
+ type=str,
188
+ default=None,
189
+ help=(
190
+ "A folder containing the training data. Folder contents must follow the structure described in"
191
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
192
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
193
+ ),
194
+ )
195
+ parser.add_argument(
196
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
197
+ )
198
+ parser.add_argument(
199
+ "--caption_column",
200
+ type=str,
201
+ default="text",
202
+ help="The column of the dataset containing a caption or a list of captions.",
203
+ )
204
+ parser.add_argument(
205
+ "--validation_prompt", type=str, default=None, help="A prompt that is sampled during training for inference."
206
+ )
207
+ parser.add_argument(
208
+ "--num_validation_images",
209
+ type=int,
210
+ default=4,
211
+ help="Number of images that should be generated during validation with `validation_prompt`.",
212
+ )
213
+ parser.add_argument(
214
+ "--validation_epochs",
215
+ type=int,
216
+ default=1,
217
+ help=(
218
+ "Run fine-tuning validation every X epochs. The validation process consists of running the prompt"
219
+ " `args.validation_prompt` a total of `args.num_validation_images` times."
220
+ ),
221
+ )
222
+ parser.add_argument(
223
+ "--max_train_samples",
224
+ type=int,
225
+ default=None,
226
+ help=(
227
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
228
+ "value if set."
229
+ ),
230
+ )
231
+ parser.add_argument(
232
+ "--output_dir",
233
+ type=str,
234
+ default="sd-model-finetuned-lora",
235
+ help="The output directory where the model predictions and checkpoints will be written.",
236
+ )
237
+ parser.add_argument(
238
+ "--cache_dir",
239
+ type=str,
240
+ default=None,
241
+ help="The directory where the downloaded models and datasets will be stored.",
242
+ )
243
+ parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
244
+ parser.add_argument(
245
+ "--resolution",
246
+ type=int,
247
+ default=512,
248
+ help=(
249
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
250
+ " resolution"
251
+ ),
252
+ )
253
+ parser.add_argument(
254
+ "--center_crop",
255
+ default=False,
256
+ action="store_true",
257
+ help=(
258
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
259
+ " cropped. The images will be resized to the resolution first before cropping."
260
+ ),
261
+ )
262
+ parser.add_argument(
263
+ "--random_flip",
264
+ action="store_true",
265
+ help="whether to randomly flip images horizontally",
266
+ )
267
+ parser.add_argument(
268
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
269
+ )
270
+ parser.add_argument("--num_train_epochs", type=int, default=100)
271
+ parser.add_argument(
272
+ "--max_train_steps",
273
+ type=int,
274
+ default=None,
275
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
276
+ )
277
+ parser.add_argument(
278
+ "--gradient_accumulation_steps",
279
+ type=int,
280
+ default=1,
281
+ help="Number of update steps to accumulate before performing a backward/update pass.",
282
+ )
283
+ parser.add_argument(
284
+ "--gradient_checkpointing",
285
+ action="store_true",
286
+ help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
287
+ )
288
+ parser.add_argument(
289
+ "--learning_rate",
290
+ type=float,
291
+ default=1e-4,
292
+ help="Initial learning rate (after the potential warmup period) to use.",
293
+ )
294
+ parser.add_argument(
295
+ "--scale_lr",
296
+ action="store_true",
297
+ default=False,
298
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
299
+ )
300
+ parser.add_argument(
301
+ "--lr_scheduler",
302
+ type=str,
303
+ default="constant",
304
+ help=(
305
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
306
+ ' "constant", "constant_with_warmup"]'
307
+ ),
308
+ )
309
+ parser.add_argument(
310
+ "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
311
+ )
312
+ parser.add_argument(
313
+ "--snr_gamma",
314
+ type=float,
315
+ default=None,
316
+ help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. "
317
+ "More details here: https://arxiv.org/abs/2303.09556.",
318
+ )
319
+ parser.add_argument(
320
+ "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
321
+ )
322
+ parser.add_argument(
323
+ "--allow_tf32",
324
+ action="store_true",
325
+ help=(
326
+ "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
327
+ " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
328
+ ),
329
+ )
330
+ parser.add_argument(
331
+ "--dataloader_num_workers",
332
+ type=int,
333
+ default=0,
334
+ help=(
335
+ "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
336
+ ),
337
+ )
338
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
339
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
340
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
341
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
342
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
343
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
344
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
345
+ parser.add_argument(
346
+ "--prediction_type",
347
+ type=str,
348
+ default=None,
349
+ help="The prediction_type that shall be used for training. Choose between 'epsilon' or 'v_prediction' or leave `None`. If left to `None` the default prediction type of the scheduler: `noise_scheduler.config.prediction_type` is chosen.",
350
+ )
351
+ parser.add_argument(
352
+ "--hub_model_id",
353
+ type=str,
354
+ default=None,
355
+ help="The name of the repository to keep in sync with the local `output_dir`.",
356
+ )
357
+ parser.add_argument(
358
+ "--logging_dir",
359
+ type=str,
360
+ default="logs",
361
+ help=(
362
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
363
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
364
+ ),
365
+ )
366
+ parser.add_argument(
367
+ "--mixed_precision",
368
+ type=str,
369
+ default=None,
370
+ choices=["no", "fp16", "bf16"],
371
+ help=(
372
+ "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
373
+ " 1.10 and an Nvidia Ampere GPU. Defaults to the value of the accelerate config of the current system or the"
374
+ " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
375
+ ),
376
+ )
377
+ parser.add_argument(
378
+ "--report_to",
379
+ type=str,
380
+ default="tensorboard",
381
+ help=(
382
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
383
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
384
+ ),
385
+ )
386
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
387
+ parser.add_argument(
388
+ "--checkpointing_steps",
389
+ type=int,
390
+ default=500,
391
+ help=(
392
+ "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming"
393
+ " training using `--resume_from_checkpoint`."
394
+ ),
395
+ )
396
+ parser.add_argument(
397
+ "--checkpoints_total_limit",
398
+ type=int,
399
+ default=None,
400
+ help=("Max number of checkpoints to store."),
401
+ )
402
+ parser.add_argument(
403
+ "--resume_from_checkpoint",
404
+ type=str,
405
+ default=None,
406
+ help=(
407
+ "Whether training should be resumed from a previous checkpoint. Use a path saved by"
408
+ ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
409
+ ),
410
+ )
411
+ parser.add_argument(
412
+ "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
413
+ )
414
+ parser.add_argument("--noise_offset", type=float, default=0, help="The scale of noise offset.")
415
+ parser.add_argument(
416
+ "--rank",
417
+ type=int,
418
+ default=4,
419
+ help=("The dimension of the LoRA update matrices."),
420
+ )
421
+
422
+ args = parser.parse_args()
423
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
424
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
425
+ args.local_rank = env_local_rank
426
+
427
+ # Sanity checks
428
+ if args.dataset_name is None and args.train_data_dir is None:
429
+ raise ValueError("Need either a dataset name or a training folder.")
430
+
431
+ return args
432
+
433
+
434
+ DATASET_NAME_MAPPING = {
435
+ "lambdalabs/naruto-blip-captions": ("image", "text"),
436
+ }
437
+
438
+
439
+ def main():
440
+ args = parse_args()
441
+ if args.report_to == "wandb" and args.hub_token is not None:
442
+ raise ValueError(
443
+ "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
444
+ " Please use `huggingface-cli login` to authenticate with the Hub."
445
+ )
446
+
447
+ logging_dir = Path(args.output_dir, args.logging_dir)
448
+
449
+ accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
450
+
451
+ accelerator = Accelerator(
452
+ gradient_accumulation_steps=args.gradient_accumulation_steps,
453
+ mixed_precision=args.mixed_precision,
454
+ log_with=args.report_to,
455
+ project_config=accelerator_project_config,
456
+ )
457
+
458
+ # Disable AMP for MPS.
459
+ if torch.backends.mps.is_available():
460
+ accelerator.native_amp = False
461
+
462
+ # Make one log on every process with the configuration for debugging.
463
+ logging.basicConfig(
464
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
465
+ datefmt="%m/%d/%Y %H:%M:%S",
466
+ level=logging.INFO,
467
+ )
468
+ logger.info(accelerator.state, main_process_only=False)
469
+ if accelerator.is_local_main_process:
470
+ datasets.utils.logging.set_verbosity_warning()
471
+ transformers.utils.logging.set_verbosity_warning()
472
+ diffusers.utils.logging.set_verbosity_info()
473
+ else:
474
+ datasets.utils.logging.set_verbosity_error()
475
+ transformers.utils.logging.set_verbosity_error()
476
+ diffusers.utils.logging.set_verbosity_error()
477
+
478
+ # If passed along, set the training seed now.
479
+ if args.seed is not None:
480
+ set_seed(args.seed)
481
+
482
+ # Handle the repository creation
483
+ if accelerator.is_main_process:
484
+ if args.output_dir is not None:
485
+ os.makedirs(args.output_dir, exist_ok=True)
486
+
487
+ if args.push_to_hub:
488
+ repo_id = create_repo(
489
+ repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
490
+ ).repo_id
491
+ # Load scheduler, tokenizer and models.
492
+ noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
493
+ tokenizer = CLIPTokenizer.from_pretrained(
494
+ args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision
495
+ )
496
+ text_encoder = CLIPTextModel.from_pretrained(
497
+ args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
498
+ )
499
+ vae = AutoencoderKL.from_pretrained(
500
+ args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision, variant=args.variant
501
+ )
502
+ unet = UNet2DConditionModel.from_pretrained(
503
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, variant=args.variant
504
+ )
505
+ # freeze parameters of models to save more memory
506
+ unet.requires_grad_(False)
507
+ vae.requires_grad_(False)
508
+ text_encoder.requires_grad_(False)
509
+
510
+ # For mixed precision training we cast all non-trainable weights (vae, non-lora text_encoder and non-lora unet) to half-precision
511
+ # as these weights are only used for inference; keeping them in full precision is not required.
512
+ weight_dtype = torch.float32
513
+ if accelerator.mixed_precision == "fp16":
514
+ weight_dtype = torch.float16
515
+ elif accelerator.mixed_precision == "bf16":
516
+ weight_dtype = torch.bfloat16
517
+
518
+ unet_lora_config = LoraConfig(
519
+ r=args.rank,
520
+ lora_alpha=args.rank,
521
+ init_lora_weights="gaussian",
522
+ target_modules=["to_k", "to_q", "to_v", "to_out.0"],
523
+ )
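+ # Note: LoRA keeps the base weight W frozen and learns a low-rank update
+ # delta_W = (lora_alpha / r) * B @ A that is injected into the attention projections listed in
+ # `target_modules`; with lora_alpha == r the scaling factor is 1.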
524
+
525
+ # Move unet, vae and text_encoder to device and cast to weight_dtype
526
+ unet.to(accelerator.device, dtype=weight_dtype)
527
+ vae.to(accelerator.device, dtype=weight_dtype)
528
+ text_encoder.to(accelerator.device, dtype=weight_dtype)
529
+
530
+ # Add adapter and make sure the trainable params are in float32.
531
+ unet.add_adapter(unet_lora_config)
532
+ if args.mixed_precision == "fp16":
533
+ # only upcast trainable parameters (LoRA) into fp32
534
+ cast_training_params(unet, dtype=torch.float32)
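+ # Note: the frozen base weights can stay in fp16/bf16, but the trainable LoRA parameters are kept in
+ # fp32 so that the optimizer updates to the small adapter weights do not underflow (standard
+ # mixed-precision practice).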
535
+
536
+ if args.enable_xformers_memory_efficient_attention:
537
+ if is_xformers_available():
538
+ import xformers
539
+
540
+ xformers_version = version.parse(xformers.__version__)
541
+ if xformers_version == version.parse("0.0.16"):
542
+ logger.warning(
543
+ "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
544
+ )
545
+ unet.enable_xformers_memory_efficient_attention()
546
+ else:
547
+ raise ValueError("xformers is not available. Make sure it is installed correctly")
548
+
549
+ lora_layers = filter(lambda p: p.requires_grad, unet.parameters())
550
+
551
+ if args.gradient_checkpointing:
552
+ unet.enable_gradient_checkpointing()
553
+
554
+ # Enable TF32 for faster training on Ampere GPUs,
555
+ # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
556
+ if args.allow_tf32:
557
+ torch.backends.cuda.matmul.allow_tf32 = True
558
+
559
+ if args.scale_lr:
560
+ args.learning_rate = (
561
+ args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
562
+ )
563
+
564
+ # Initialize the optimizer
565
+ if args.use_8bit_adam:
566
+ try:
567
+ import bitsandbytes as bnb
568
+ except ImportError:
569
+ raise ImportError(
570
+ "Please install bitsandbytes to use 8-bit Adam. You can do so by running `pip install bitsandbytes`"
571
+ )
572
+
573
+ optimizer_cls = bnb.optim.AdamW8bit
574
+ else:
575
+ optimizer_cls = torch.optim.AdamW
576
+
577
+ optimizer = optimizer_cls(
578
+ lora_layers,
579
+ lr=args.learning_rate,
580
+ betas=(args.adam_beta1, args.adam_beta2),
581
+ weight_decay=args.adam_weight_decay,
582
+ eps=args.adam_epsilon,
583
+ )
584
+
585
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
586
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
587
+
588
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
589
+ # download the dataset.
590
+ if args.dataset_name is not None:
591
+ # Downloading and loading a dataset from the hub.
592
+ dataset = load_dataset(
593
+ args.dataset_name,
594
+ args.dataset_config_name,
595
+ cache_dir=args.cache_dir,
596
+ data_dir=args.train_data_dir,
597
+ )
598
+ else:
599
+ data_files = {}
600
+ if args.train_data_dir is not None:
601
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
602
+ dataset = load_dataset(
603
+ "imagefolder",
604
+ data_files=data_files,
605
+ cache_dir=args.cache_dir,
606
+ )
607
+ # See more about loading custom images at
608
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
609
+
610
+ # Preprocessing the datasets.
611
+ # We need to tokenize inputs and targets.
612
+ column_names = dataset["train"].column_names
613
+
614
+ # 6. Get the column names for input/target.
615
+ dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None)
616
+ if args.image_column is None:
617
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
618
+ else:
619
+ image_column = args.image_column
620
+ if image_column not in column_names:
621
+ raise ValueError(
622
+ f"'--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
623
+ )
624
+ if args.caption_column is None:
625
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
626
+ else:
627
+ caption_column = args.caption_column
628
+ if caption_column not in column_names:
629
+ raise ValueError(
630
+ f"'--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
631
+ )
632
+
633
+ # Preprocessing the datasets.
634
+ # We need to tokenize input captions and transform the images.
635
+ def tokenize_captions(examples, is_train=True):
636
+ captions = []
637
+ for caption in examples[caption_column]:
638
+ if isinstance(caption, str):
639
+ captions.append(caption)
640
+ elif isinstance(caption, (list, np.ndarray)):
641
+ # take a random caption if there are multiple
642
+ captions.append(random.choice(caption) if is_train else caption[0])
643
+ else:
644
+ raise ValueError(
645
+ f"Caption column `{caption_column}` should contain either strings or lists of strings."
646
+ )
647
+ inputs = tokenizer(
648
+ captions, max_length=tokenizer.model_max_length, padding="max_length", truncation=True, return_tensors="pt"
649
+ )
650
+ return inputs.input_ids
651
+
652
+ # Preprocessing the datasets.
653
+ train_transforms = transforms.Compose(
654
+ [
655
+ transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR),
656
+ transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution),
657
+ transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x),
658
+ transforms.ToTensor(),
659
+ transforms.Normalize([0.5], [0.5]),
660
+ ]
661
+ )
662
+
663
+ def unwrap_model(model):
664
+ model = accelerator.unwrap_model(model)
665
+ model = model._orig_mod if is_compiled_module(model) else model
666
+ return model
667
+
668
+ def preprocess_train(examples):
669
+ images = [image.convert("RGB") for image in examples[image_column]]
670
+ examples["pixel_values"] = [train_transforms(image) for image in images]
671
+ examples["input_ids"] = tokenize_captions(examples)
672
+ return examples
673
+
674
+ with accelerator.main_process_first():
675
+ if args.max_train_samples is not None:
676
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
677
+ # Set the training transforms
678
+ train_dataset = dataset["train"].with_transform(preprocess_train)
679
+
680
+ def collate_fn(examples):
681
+ pixel_values = torch.stack([example["pixel_values"] for example in examples])
682
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
683
+ input_ids = torch.stack([example["input_ids"] for example in examples])
684
+ return {"pixel_values": pixel_values, "input_ids": input_ids}
685
+
686
+ # DataLoaders creation:
687
+ train_dataloader = torch.utils.data.DataLoader(
688
+ train_dataset,
689
+ shuffle=True,
690
+ collate_fn=collate_fn,
691
+ batch_size=args.train_batch_size,
692
+ num_workers=args.dataloader_num_workers,
693
+ )
694
+
695
+ # Scheduler and math around the number of training steps.
696
+ # Check the PR https://github.com/huggingface/diffusers/pull/8312 for detailed explanation.
697
+ num_warmup_steps_for_scheduler = args.lr_warmup_steps * accelerator.num_processes
698
+ if args.max_train_steps is None:
699
+ len_train_dataloader_after_sharding = math.ceil(len(train_dataloader) / accelerator.num_processes)
700
+ num_update_steps_per_epoch = math.ceil(len_train_dataloader_after_sharding / args.gradient_accumulation_steps)
701
+ num_training_steps_for_scheduler = (
702
+ args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes
703
+ )
704
+ else:
705
+ num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes
706
+
707
+ lr_scheduler = get_scheduler(
708
+ args.lr_scheduler,
709
+ optimizer=optimizer,
710
+ num_warmup_steps=num_warmup_steps_for_scheduler,
711
+ num_training_steps=num_training_steps_for_scheduler,
712
+ )
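+ # Note: the warmup and total step counts are multiplied by `accelerator.num_processes` because the
+ # scheduler is stepped on every process; see the PR linked above for the detailed reasoning.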
713
+
714
+ # Prepare everything with our `accelerator`.
715
+ unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
716
+ unet, optimizer, train_dataloader, lr_scheduler
717
+ )
718
+
719
+ # We need to recalculate our total training steps as the size of the training dataloader may have changed.
720
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
721
+ if args.max_train_steps is None:
722
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
723
+ if num_training_steps_for_scheduler != args.max_train_steps * accelerator.num_processes:
724
+ logger.warning(
725
+ f"The length of the 'train_dataloader' after 'accelerator.prepare' ({len(train_dataloader)}) does not match "
726
+ f"the expected length ({len_train_dataloader_after_sharding}) when the learning rate scheduler was created. "
727
+ f"This inconsistency may result in the learning rate scheduler not functioning properly."
728
+ )
729
+ # Afterwards we recalculate our number of training epochs
730
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
731
+
732
+ # We need to initialize the trackers we use, and also store our configuration.
733
+ # The trackers initialize automatically on the main process.
734
+ if accelerator.is_main_process:
735
+ accelerator.init_trackers("text2image-fine-tune", config=vars(args))
736
+
737
+ # Train!
738
+ total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
739
+
740
+ logger.info("***** Running training *****")
741
+ logger.info(f" Num examples = {len(train_dataset)}")
742
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
743
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
744
+ logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
745
+ logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
746
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
747
+ global_step = 0
748
+ first_epoch = 0
749
+
750
+ # Potentially load in the weights and states from a previous save
751
+ if args.resume_from_checkpoint:
752
+ if args.resume_from_checkpoint != "latest":
753
+ path = os.path.basename(args.resume_from_checkpoint)
754
+ else:
755
+ # Get the most recent checkpoint
756
+ dirs = os.listdir(args.output_dir)
757
+ dirs = [d for d in dirs if d.startswith("checkpoint")]
758
+ dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
759
+ path = dirs[-1] if len(dirs) > 0 else None
760
+
761
+ if path is None:
762
+ accelerator.print(
763
+ f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
764
+ )
765
+ args.resume_from_checkpoint = None
766
+ initial_global_step = 0
767
+ else:
768
+ accelerator.print(f"Resuming from checkpoint {path}")
769
+ accelerator.load_state(os.path.join(args.output_dir, path))
770
+ global_step = int(path.split("-")[1])
771
+
772
+ initial_global_step = global_step
773
+ first_epoch = global_step // num_update_steps_per_epoch
774
+ else:
775
+ initial_global_step = 0
776
+
777
+ progress_bar = tqdm(
778
+ range(0, args.max_train_steps),
779
+ initial=initial_global_step,
780
+ desc="Steps",
781
+ # Only show the progress bar once on each machine.
782
+ disable=not accelerator.is_local_main_process,
783
+ )
784
+
785
+ for epoch in range(first_epoch, args.num_train_epochs):
786
+ unet.train()
787
+ train_loss = 0.0
788
+ for step, batch in enumerate(train_dataloader):
789
+ with accelerator.accumulate(unet):
790
+ # Convert images to latent space
791
+ latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample()
792
+ latents = latents * vae.config.scaling_factor
793
+
794
+ # Sample noise that we'll add to the latents
795
+ noise = torch.randn_like(latents)
796
+ if args.noise_offset:
797
+ # https://www.crosslabs.org//blog/diffusion-with-offset-noise
798
+ noise += args.noise_offset * torch.randn(
799
+ (latents.shape[0], latents.shape[1], 1, 1), device=latents.device
800
+ )
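+ # Note: offset noise adds a constant per-sample, per-channel shift to the Gaussian noise, which helps
+ # the fine-tuned model generate very dark or very bright images (see the blog post linked above).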
801
+
802
+ bsz = latents.shape[0]
803
+ # Sample a random timestep for each image
804
+ timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device)
805
+ timesteps = timesteps.long()
806
+
807
+ # Add noise to the latents according to the noise magnitude at each timestep
808
+ # (this is the forward diffusion process)
809
+ noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
810
+
811
+ # Get the text embedding for conditioning
812
+ encoder_hidden_states = text_encoder(batch["input_ids"], return_dict=False)[0]
813
+
814
+ # Get the target for loss depending on the prediction type
815
+ if args.prediction_type is not None:
816
+ # set prediction_type of scheduler if defined
817
+ noise_scheduler.register_to_config(prediction_type=args.prediction_type)
818
+
819
+ if noise_scheduler.config.prediction_type == "epsilon":
820
+ target = noise
821
+ elif noise_scheduler.config.prediction_type == "v_prediction":
822
+ target = noise_scheduler.get_velocity(latents, noise, timesteps)
823
+ else:
824
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
825
+
826
+ # Predict the noise residual and compute loss
827
+ model_pred = unet(noisy_latents, timesteps, encoder_hidden_states, return_dict=False)[0]
828
+
829
+ if args.snr_gamma is None:
830
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
831
+ else:
832
+ # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556.
833
+ # Since we predict the noise instead of x_0, the original formulation is slightly changed.
834
+ # This is discussed in Section 4.2 of the same paper.
835
+ snr = compute_snr(noise_scheduler, timesteps)
836
+ mse_loss_weights = torch.stack([snr, args.snr_gamma * torch.ones_like(timesteps)], dim=1).min(
837
+ dim=1
838
+ )[0]
839
+ if noise_scheduler.config.prediction_type == "epsilon":
840
+ mse_loss_weights = mse_loss_weights / snr
841
+ elif noise_scheduler.config.prediction_type == "v_prediction":
842
+ mse_loss_weights = mse_loss_weights / (snr + 1)
843
+
844
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
845
+ loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
846
+ loss = loss.mean()
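+ # Note: this is the Min-SNR-gamma weighting, w_t = min(SNR_t, gamma) / SNR_t for epsilon-prediction
+ # (or min(SNR_t, gamma) / (SNR_t + 1) for v-prediction), which down-weights low-noise, high-SNR
+ # timesteps so the loss is balanced across timesteps.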
847
+
848
+ # Gather the losses across all processes for logging (if we use distributed training).
849
+ avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
850
+ train_loss += avg_loss.item() / args.gradient_accumulation_steps
851
+
852
+ # Backpropagate
853
+ accelerator.backward(loss)
854
+ if accelerator.sync_gradients:
855
+ params_to_clip = lora_layers
856
+ accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
857
+ optimizer.step()
858
+ lr_scheduler.step()
859
+ optimizer.zero_grad()
860
+
861
+ # Checks if the accelerator has performed an optimization step behind the scenes
862
+ if accelerator.sync_gradients:
863
+ progress_bar.update(1)
864
+ global_step += 1
865
+ accelerator.log({"train_loss": train_loss}, step=global_step)
866
+ train_loss = 0.0
867
+
868
+ if global_step % args.checkpointing_steps == 0:
869
+ if accelerator.is_main_process:
870
+ # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
871
+ if args.checkpoints_total_limit is not None:
872
+ checkpoints = os.listdir(args.output_dir)
873
+ checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
874
+ checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
875
+
876
+ # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
877
+ if len(checkpoints) >= args.checkpoints_total_limit:
878
+ num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
879
+ removing_checkpoints = checkpoints[0:num_to_remove]
880
+
881
+ logger.info(
882
+ f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
883
+ )
884
+ logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
885
+
886
+ for removing_checkpoint in removing_checkpoints:
887
+ removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
888
+ shutil.rmtree(removing_checkpoint)
889
+
890
+ save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
891
+ accelerator.save_state(save_path)
892
+
893
+ unwrapped_unet = unwrap_model(unet)
894
+ unet_lora_state_dict = convert_state_dict_to_diffusers(
895
+ get_peft_model_state_dict(unwrapped_unet)
896
+ )
897
+
898
+ StableDiffusionPipeline.save_lora_weights(
899
+ save_directory=save_path,
900
+ unet_lora_layers=unet_lora_state_dict,
901
+ safe_serialization=True,
902
+ )
903
+
904
+ logger.info(f"Saved state to {save_path}")
905
+
906
+ logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
907
+ progress_bar.set_postfix(**logs)
908
+
909
+ if global_step >= args.max_train_steps:
910
+ break
911
+
912
+ if accelerator.is_main_process:
913
+ if args.validation_prompt is not None and epoch % args.validation_epochs == 0:
914
+ # create pipeline
915
+ pipeline = DiffusionPipeline.from_pretrained(
916
+ args.pretrained_model_name_or_path,
917
+ unet=unwrap_model(unet),
918
+ revision=args.revision,
919
+ variant=args.variant,
920
+ torch_dtype=weight_dtype,
921
+ )
922
+ images = log_validation(pipeline, args, accelerator, epoch)
923
+
924
+ del pipeline
925
+ torch.cuda.empty_cache()
926
+
927
+ # Save the lora layers
928
+ accelerator.wait_for_everyone()
929
+ if accelerator.is_main_process:
930
+ unet = unet.to(torch.float32)
931
+
932
+ unwrapped_unet = unwrap_model(unet)
933
+ unet_lora_state_dict = convert_state_dict_to_diffusers(get_peft_model_state_dict(unwrapped_unet))
934
+ StableDiffusionPipeline.save_lora_weights(
935
+ save_directory=args.output_dir,
936
+ unet_lora_layers=unet_lora_state_dict,
937
+ safe_serialization=True,
938
+ )
939
+
940
+ # Final inference
941
+ # Load previous pipeline
942
+ if args.validation_prompt is not None:
943
+ pipeline = DiffusionPipeline.from_pretrained(
944
+ args.pretrained_model_name_or_path,
945
+ revision=args.revision,
946
+ variant=args.variant,
947
+ torch_dtype=weight_dtype,
948
+ )
949
+
950
+ # load attention processors
951
+ pipeline.load_lora_weights(args.output_dir)
952
+
953
+ # run inference
954
+ images = log_validation(pipeline, args, accelerator, epoch, is_final_validation=True)
955
+
956
+ if args.push_to_hub:
957
+ save_model_card(
958
+ repo_id,
959
+ images=images,
960
+ base_model=args.pretrained_model_name_or_path,
961
+ dataset_name=args.dataset_name,
962
+ repo_folder=args.output_dir,
963
+ )
964
+ upload_folder(
965
+ repo_id=repo_id,
966
+ folder_path=args.output_dir,
967
+ commit_message="End of training",
968
+ ignore_patterns=["step_*", "epoch_*"],
969
+ )
970
+
971
+ accelerator.end_training()
972
+
973
+
974
+ if __name__ == "__main__":
975
+ main()
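The SNR-weighted branch in the training loop above implements Min-SNR loss weighting (https://arxiv.org/abs/2303.09556). Below is a minimal, standalone sketch of just that weighting, not part of the uploaded script; the batch shapes and `snr_gamma` value are illustrative and no model checkpoint is required:

```python
# Minimal sketch of the Min-SNR loss weighting used in the training loop above.
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler
from diffusers.training_utils import compute_snr

noise_scheduler = DDPMScheduler(num_train_timesteps=1000)  # epsilon prediction by default
snr_gamma = 5.0
batch_size = 4

timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (batch_size,))
model_pred = torch.randn(batch_size, 4, 64, 64)  # stand-in for the UNet's noise prediction
target = torch.randn(batch_size, 4, 64, 64)      # stand-in for the sampled noise

snr = compute_snr(noise_scheduler, timesteps)
# Clamp each per-sample weight at snr_gamma, then divide by snr for epsilon prediction.
mse_loss_weights = torch.stack([snr, snr_gamma * torch.ones_like(timesteps)], dim=1).min(dim=1)[0] / snr

loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
print(loss.mean())
```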
train_text_to_image_lora_sdxl.py ADDED
@@ -0,0 +1,1327 @@
1
+ #!/usr/bin/env python
2
+ # coding=utf-8
3
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """Fine-tuning script for Stable Diffusion XL for text2image with support for LoRA."""
17
+
18
+ import argparse
19
+ import logging
20
+ import math
21
+ import os
22
+ import random
23
+ import shutil
24
+ from contextlib import nullcontext
25
+ from pathlib import Path
26
+
27
+ import datasets
28
+ import numpy as np
29
+ import torch
30
+ import torch.nn.functional as F
31
+ import torch.utils.checkpoint
32
+ import transformers
33
+ from accelerate import Accelerator
34
+ from accelerate.logging import get_logger
35
+ from accelerate.utils import DistributedDataParallelKwargs, DistributedType, ProjectConfiguration, set_seed
36
+ from datasets import load_dataset
37
+ from huggingface_hub import create_repo, upload_folder
38
+ from packaging import version
39
+ from peft import LoraConfig, set_peft_model_state_dict
40
+ from peft.utils import get_peft_model_state_dict
41
+ from torchvision import transforms
42
+ from torchvision.transforms.functional import crop
43
+ from tqdm.auto import tqdm
44
+ from transformers import AutoTokenizer, PretrainedConfig
45
+
46
+ import diffusers
47
+ from diffusers import (
48
+ AutoencoderKL,
49
+ DDPMScheduler,
50
+ StableDiffusionXLPipeline,
51
+ UNet2DConditionModel,
52
+ )
53
+ from diffusers.loaders import StableDiffusionLoraLoaderMixin
54
+ from diffusers.optimization import get_scheduler
55
+ from diffusers.training_utils import _set_state_dict_into_text_encoder, cast_training_params, compute_snr
56
+ from diffusers.utils import (
57
+ check_min_version,
58
+ convert_state_dict_to_diffusers,
59
+ convert_unet_state_dict_to_peft,
60
+ is_wandb_available,
61
+ )
62
+ from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card
63
+ from diffusers.utils.import_utils import is_torch_npu_available, is_xformers_available
64
+ from diffusers.utils.torch_utils import is_compiled_module
65
+
66
+
67
+ if is_wandb_available():
68
+ import wandb
69
+
70
+ # Will error if the minimal version of diffusers is not installed. Remove at your own risk.
71
+ check_min_version("0.33.0.dev0")
72
+
73
+ logger = get_logger(__name__)
74
+ if is_torch_npu_available():
75
+ torch.npu.config.allow_internal_format = False
76
+
77
+
78
+ def save_model_card(
79
+ repo_id: str,
80
+ images: list = None,
81
+ base_model: str = None,
82
+ dataset_name: str = None,
83
+ train_text_encoder: bool = False,
84
+ repo_folder: str = None,
85
+ vae_path: str = None,
86
+ ):
87
+ img_str = ""
88
+ if images is not None:
89
+ for i, image in enumerate(images):
90
+ image.save(os.path.join(repo_folder, f"image_{i}.png"))
91
+ img_str += f"![img_{i}](./image_{i}.png)\n"
92
+
93
+ model_description = f"""
94
+ # LoRA text2image fine-tuning - {repo_id}
95
+
96
+ These are LoRA adaptation weights for {base_model}. The weights were fine-tuned on the {dataset_name} dataset. You can find some example images below.\n
97
+ {img_str}
98
+
99
+ LoRA for the text encoder was enabled: {train_text_encoder}.
100
+
101
+ Special VAE used for training: {vae_path}.
102
+ """
103
+ model_card = load_or_create_model_card(
104
+ repo_id_or_path=repo_id,
105
+ from_training=True,
106
+ license="creativeml-openrail-m",
107
+ base_model=base_model,
108
+ model_description=model_description,
109
+ inference=True,
110
+ )
111
+
112
+ tags = [
113
+ "stable-diffusion-xl",
114
+ "stable-diffusion-xl-diffusers",
115
+ "text-to-image",
116
+ "diffusers",
117
+ "diffusers-training",
118
+ "lora",
119
+ ]
120
+ model_card = populate_model_card(model_card, tags=tags)
121
+
122
+ model_card.save(os.path.join(repo_folder, "README.md"))
123
+
124
+
125
+ def log_validation(
126
+ pipeline,
127
+ args,
128
+ accelerator,
129
+ epoch,
130
+ is_final_validation=False,
131
+ ):
132
+ logger.info(
133
+ f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
134
+ f" {args.validation_prompt}."
135
+ )
136
+ pipeline = pipeline.to(accelerator.device)
137
+ pipeline.set_progress_bar_config(disable=True)
138
+
139
+ # run inference
140
+ generator = torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed is not None else None
141
+ pipeline_args = {"prompt": args.validation_prompt}
142
+ if torch.backends.mps.is_available():
143
+ autocast_ctx = nullcontext()
144
+ else:
145
+ autocast_ctx = torch.autocast(accelerator.device.type)
146
+
147
+ with autocast_ctx:
148
+ images = [pipeline(**pipeline_args, generator=generator).images[0] for _ in range(args.num_validation_images)]
149
+
150
+ for tracker in accelerator.trackers:
151
+ phase_name = "test" if is_final_validation else "validation"
152
+ if tracker.name == "tensorboard":
153
+ np_images = np.stack([np.asarray(img) for img in images])
154
+ tracker.writer.add_images(phase_name, np_images, epoch, dataformats="NHWC")
155
+ if tracker.name == "wandb":
156
+ tracker.log(
157
+ {
158
+ phase_name: [
159
+ wandb.Image(image, caption=f"{i}: {args.validation_prompt}") for i, image in enumerate(images)
160
+ ]
161
+ }
162
+ )
163
+ return images
164
+
165
+
166
+ def import_model_class_from_model_name_or_path(
167
+ pretrained_model_name_or_path: str, revision: str, subfolder: str = "text_encoder"
168
+ ):
169
+ text_encoder_config = PretrainedConfig.from_pretrained(
170
+ pretrained_model_name_or_path, subfolder=subfolder, revision=revision
171
+ )
172
+ model_class = text_encoder_config.architectures[0]
173
+
174
+ if model_class == "CLIPTextModel":
175
+ from transformers import CLIPTextModel
176
+
177
+ return CLIPTextModel
178
+ elif model_class == "CLIPTextModelWithProjection":
179
+ from transformers import CLIPTextModelWithProjection
180
+
181
+ return CLIPTextModelWithProjection
182
+ else:
183
+ raise ValueError(f"{model_class} is not supported.")
184
+
185
+
186
+ def parse_args(input_args=None):
187
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
188
+ parser.add_argument(
189
+ "--pretrained_model_name_or_path",
190
+ type=str,
191
+ default=None,
192
+ required=True,
193
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
194
+ )
195
+ parser.add_argument(
196
+ "--pretrained_vae_model_name_or_path",
197
+ type=str,
198
+ default=None,
199
+ help="Path to pretrained VAE model with better numerical stability. More details: https://github.com/huggingface/diffusers/pull/4038.",
200
+ )
201
+ parser.add_argument(
202
+ "--revision",
203
+ type=str,
204
+ default=None,
205
+ required=False,
206
+ help="Revision of pretrained model identifier from huggingface.co/models.",
207
+ )
208
+ parser.add_argument(
209
+ "--variant",
210
+ type=str,
211
+ default=None,
212
+ help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16",
213
+ )
214
+ parser.add_argument(
215
+ "--dataset_name",
216
+ type=str,
217
+ default=None,
218
+ help=(
219
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
220
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
221
+ " or to a folder containing files that 🤗 Datasets can understand."
222
+ ),
223
+ )
224
+ parser.add_argument(
225
+ "--dataset_config_name",
226
+ type=str,
227
+ default=None,
228
+ help="The config of the Dataset, leave as None if there's only one config.",
229
+ )
230
+ parser.add_argument(
231
+ "--train_data_dir",
232
+ type=str,
233
+ default=None,
234
+ help=(
235
+ "A folder containing the training data. Folder contents must follow the structure described in"
236
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
237
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
238
+ ),
239
+ )
240
+ parser.add_argument(
241
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
242
+ )
243
+ parser.add_argument(
244
+ "--caption_column",
245
+ type=str,
246
+ default="text",
247
+ help="The column of the dataset containing a caption or a list of captions.",
248
+ )
249
+ parser.add_argument(
250
+ "--validation_prompt",
251
+ type=str,
252
+ default=None,
253
+ help="A prompt that is used during validation to verify that the model is learning.",
254
+ )
255
+ parser.add_argument(
256
+ "--num_validation_images",
257
+ type=int,
258
+ default=4,
259
+ help="Number of images that should be generated during validation with `validation_prompt`.",
260
+ )
261
+ parser.add_argument(
262
+ "--validation_epochs",
263
+ type=int,
264
+ default=1,
265
+ help=(
266
+ "Run fine-tuning validation every X epochs. The validation process consists of running the prompt"
267
+ " `args.validation_prompt` multiple times: `args.num_validation_images`."
268
+ ),
269
+ )
270
+ parser.add_argument(
271
+ "--max_train_samples",
272
+ type=int,
273
+ default=None,
274
+ help=(
275
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
276
+ "value if set."
277
+ ),
278
+ )
279
+ parser.add_argument(
280
+ "--output_dir",
281
+ type=str,
282
+ default="sd-model-finetuned-lora",
283
+ help="The output directory where the model predictions and checkpoints will be written.",
284
+ )
285
+ parser.add_argument(
286
+ "--cache_dir",
287
+ type=str,
288
+ default=None,
289
+ help="The directory where the downloaded models and datasets will be stored.",
290
+ )
291
+ parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
292
+ parser.add_argument(
293
+ "--resolution",
294
+ type=int,
295
+ default=1024,
296
+ help=(
297
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
298
+ " resolution"
299
+ ),
300
+ )
301
+ parser.add_argument(
302
+ "--center_crop",
303
+ default=False,
304
+ action="store_true",
305
+ help=(
306
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
307
+ " cropped. The images will be resized to the resolution first before cropping."
308
+ ),
309
+ )
310
+ parser.add_argument(
311
+ "--random_flip",
312
+ action="store_true",
313
+ help="whether to randomly flip images horizontally",
314
+ )
315
+ parser.add_argument(
316
+ "--train_text_encoder",
317
+ action="store_true",
318
+ help="Whether to train the text encoder. If set, the text encoder should be float32 precision.",
319
+ )
320
+ parser.add_argument(
321
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
322
+ )
323
+ parser.add_argument("--num_train_epochs", type=int, default=100)
324
+ parser.add_argument(
325
+ "--max_train_steps",
326
+ type=int,
327
+ default=None,
328
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
329
+ )
330
+ parser.add_argument(
331
+ "--checkpointing_steps",
332
+ type=int,
333
+ default=500,
334
+ help=(
335
+ "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final"
336
+ " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming"
337
+ " training using `--resume_from_checkpoint`."
338
+ ),
339
+ )
340
+ parser.add_argument(
341
+ "--checkpoints_total_limit",
342
+ type=int,
343
+ default=None,
344
+ help=("Max number of checkpoints to store."),
345
+ )
346
+ parser.add_argument(
347
+ "--resume_from_checkpoint",
348
+ type=str,
349
+ default=None,
350
+ help=(
351
+ "Whether training should be resumed from a previous checkpoint. Use a path saved by"
352
+ ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
353
+ ),
354
+ )
355
+ parser.add_argument(
356
+ "--gradient_accumulation_steps",
357
+ type=int,
358
+ default=1,
359
+ help="Number of updates steps to accumulate before performing a backward/update pass.",
360
+ )
361
+ parser.add_argument(
362
+ "--gradient_checkpointing",
363
+ action="store_true",
364
+ help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
365
+ )
366
+ parser.add_argument(
367
+ "--learning_rate",
368
+ type=float,
369
+ default=1e-4,
370
+ help="Initial learning rate (after the potential warmup period) to use.",
371
+ )
372
+ parser.add_argument(
373
+ "--scale_lr",
374
+ action="store_true",
375
+ default=False,
376
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
377
+ )
378
+ parser.add_argument(
379
+ "--lr_scheduler",
380
+ type=str,
381
+ default="constant",
382
+ help=(
383
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
384
+ ' "constant", "constant_with_warmup"]'
385
+ ),
386
+ )
387
+ parser.add_argument(
388
+ "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
389
+ )
390
+ parser.add_argument(
391
+ "--snr_gamma",
392
+ type=float,
393
+ default=None,
394
+ help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. "
395
+ "More details here: https://arxiv.org/abs/2303.09556.",
396
+ )
397
+ parser.add_argument(
398
+ "--allow_tf32",
399
+ action="store_true",
400
+ help=(
401
+ "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
402
+ " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
403
+ ),
404
+ )
405
+ parser.add_argument(
406
+ "--dataloader_num_workers",
407
+ type=int,
408
+ default=0,
409
+ help=(
410
+ "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
411
+ ),
412
+ )
413
+ parser.add_argument(
414
+ "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
415
+ )
416
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
417
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
418
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
419
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
420
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
421
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
422
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
423
+ parser.add_argument(
424
+ "--prediction_type",
425
+ type=str,
426
+ default=None,
427
+ help="The prediction_type that shall be used for training. Choose between 'epsilon' or 'v_prediction' or leave `None`. If left to `None` the default prediction type of the scheduler: `noise_scheduler.config.prediction_type` is chosen.",
428
+ )
429
+ parser.add_argument(
430
+ "--hub_model_id",
431
+ type=str,
432
+ default=None,
433
+ help="The name of the repository to keep in sync with the local `output_dir`.",
434
+ )
435
+ parser.add_argument(
436
+ "--logging_dir",
437
+ type=str,
438
+ default="logs",
439
+ help=(
440
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
441
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
442
+ ),
443
+ )
444
+ parser.add_argument(
445
+ "--report_to",
446
+ type=str,
447
+ default="tensorboard",
448
+ help=(
449
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
450
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
451
+ ),
452
+ )
453
+ parser.add_argument(
454
+ "--mixed_precision",
455
+ type=str,
456
+ default=None,
457
+ choices=["no", "fp16", "bf16"],
458
+ help=(
459
+ "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
460
+ " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the"
461
+ " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
462
+ ),
463
+ )
464
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
465
+ parser.add_argument(
466
+ "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
467
+ )
468
+ parser.add_argument(
469
+ "--enable_npu_flash_attention", action="store_true", help="Whether or not to use npu flash attention."
470
+ )
471
+ parser.add_argument("--noise_offset", type=float, default=0, help="The scale of noise offset.")
472
+ parser.add_argument(
473
+ "--rank",
474
+ type=int,
475
+ default=4,
476
+ help=("The dimension of the LoRA update matrices."),
477
+ )
478
+ parser.add_argument(
479
+ "--debug_loss",
480
+ action="store_true",
481
+ help="debug loss for each image, if filenames are available in the dataset",
482
+ )
483
+
484
+ if input_args is not None:
485
+ args = parser.parse_args(input_args)
486
+ else:
487
+ args = parser.parse_args()
488
+
489
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
490
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
491
+ args.local_rank = env_local_rank
492
+
493
+ # Sanity checks
494
+ if args.dataset_name is None and args.train_data_dir is None:
495
+ raise ValueError("Need either a dataset name or a training folder.")
496
+
497
+ return args
498
+
499
+
500
+ DATASET_NAME_MAPPING = {
501
+ "lambdalabs/naruto-blip-captions": ("image", "text"),
502
+ }
503
+
504
+
505
+ def tokenize_prompt(tokenizer, prompt):
506
+ text_inputs = tokenizer(
507
+ prompt,
508
+ padding="max_length",
509
+ max_length=tokenizer.model_max_length,
510
+ truncation=True,
511
+ return_tensors="pt",
512
+ )
513
+ text_input_ids = text_inputs.input_ids
514
+ return text_input_ids
515
+
516
+
517
+ # Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt
518
+ def encode_prompt(text_encoders, tokenizers, prompt, text_input_ids_list=None):
519
+ prompt_embeds_list = []
520
+
521
+ for i, text_encoder in enumerate(text_encoders):
522
+ if tokenizers is not None:
523
+ tokenizer = tokenizers[i]
524
+ text_input_ids = tokenize_prompt(tokenizer, prompt)
525
+ else:
526
+ assert text_input_ids_list is not None
527
+ text_input_ids = text_input_ids_list[i]
528
+
529
+ prompt_embeds = text_encoder(
530
+ text_input_ids.to(text_encoder.device), output_hidden_states=True, return_dict=False
531
+ )
532
+
533
+ # We only ever use the pooled output of the final text encoder.
534
+ pooled_prompt_embeds = prompt_embeds[0]
535
+ prompt_embeds = prompt_embeds[-1][-2]
536
+ bs_embed, seq_len, _ = prompt_embeds.shape
537
+ prompt_embeds = prompt_embeds.view(bs_embed, seq_len, -1)
538
+ prompt_embeds_list.append(prompt_embeds)
539
+
540
+ prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
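+ # (for SDXL base this concatenates the 768-dim and 1280-dim encoder hidden states into 2048-dim prompt embeddings)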
541
+ pooled_prompt_embeds = pooled_prompt_embeds.view(bs_embed, -1)
542
+ return prompt_embeds, pooled_prompt_embeds
543
+
544
+
545
+ def main(args):
546
+ if args.report_to == "wandb" and args.hub_token is not None:
547
+ raise ValueError(
548
+ "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
549
+ " Please use `huggingface-cli login` to authenticate with the Hub."
550
+ )
551
+
552
+ logging_dir = Path(args.output_dir, args.logging_dir)
553
+
554
+ if torch.backends.mps.is_available() and args.mixed_precision == "bf16":
555
+ # due to pytorch#99272, MPS does not yet support bfloat16.
556
+ raise ValueError(
557
+ "Mixed precision training with bfloat16 is not supported on MPS. Please use fp16 (recommended) or fp32 instead."
558
+ )
559
+
560
+ accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
561
+ kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
562
+ accelerator = Accelerator(
563
+ gradient_accumulation_steps=args.gradient_accumulation_steps,
564
+ mixed_precision=args.mixed_precision,
565
+ log_with=args.report_to,
566
+ project_config=accelerator_project_config,
567
+ kwargs_handlers=[kwargs],
568
+ )
569
+
570
+ # Make one log on every process with the configuration for debugging.
571
+ logging.basicConfig(
572
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
573
+ datefmt="%m/%d/%Y %H:%M:%S",
574
+ level=logging.INFO,
575
+ )
576
+ logger.info(accelerator.state, main_process_only=False)
577
+ if accelerator.is_local_main_process:
578
+ datasets.utils.logging.set_verbosity_warning()
579
+ transformers.utils.logging.set_verbosity_warning()
580
+ diffusers.utils.logging.set_verbosity_info()
581
+ else:
582
+ datasets.utils.logging.set_verbosity_error()
583
+ transformers.utils.logging.set_verbosity_error()
584
+ diffusers.utils.logging.set_verbosity_error()
585
+
586
+ # If passed along, set the training seed now.
587
+ if args.seed is not None:
588
+ set_seed(args.seed)
589
+
590
+ # Handle the repository creation
591
+ if accelerator.is_main_process:
592
+ if args.output_dir is not None:
593
+ os.makedirs(args.output_dir, exist_ok=True)
594
+
595
+ if args.push_to_hub:
596
+ repo_id = create_repo(
597
+ repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
598
+ ).repo_id
599
+
600
+ # Load the tokenizers
601
+ tokenizer_one = AutoTokenizer.from_pretrained(
602
+ args.pretrained_model_name_or_path,
603
+ subfolder="tokenizer",
604
+ revision=args.revision,
605
+ use_fast=False,
606
+ )
607
+ tokenizer_two = AutoTokenizer.from_pretrained(
608
+ args.pretrained_model_name_or_path,
609
+ subfolder="tokenizer_2",
610
+ revision=args.revision,
611
+ use_fast=False,
612
+ )
613
+
614
+ # import correct text encoder classes
615
+ text_encoder_cls_one = import_model_class_from_model_name_or_path(
616
+ args.pretrained_model_name_or_path, args.revision
617
+ )
618
+ text_encoder_cls_two = import_model_class_from_model_name_or_path(
619
+ args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2"
620
+ )
621
+
622
+ # Load scheduler and models
623
+ noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
624
+ text_encoder_one = text_encoder_cls_one.from_pretrained(
625
+ args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, variant=args.variant
626
+ )
627
+ text_encoder_two = text_encoder_cls_two.from_pretrained(
628
+ args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision, variant=args.variant
629
+ )
630
+ vae_path = (
631
+ args.pretrained_model_name_or_path
632
+ if args.pretrained_vae_model_name_or_path is None
633
+ else args.pretrained_vae_model_name_or_path
634
+ )
635
+ vae = AutoencoderKL.from_pretrained(
636
+ vae_path,
637
+ subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None,
638
+ revision=args.revision,
639
+ variant=args.variant,
640
+ )
641
+ unet = UNet2DConditionModel.from_pretrained(
642
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, variant=args.variant
643
+ )
644
+
645
+ # We only train the additional adapter LoRA layers
646
+ vae.requires_grad_(False)
647
+ text_encoder_one.requires_grad_(False)
648
+ text_encoder_two.requires_grad_(False)
649
+ unet.requires_grad_(False)
650
+
651
+ # For mixed precision training we cast all non-trainable weights (vae, non-lora text_encoder and non-lora unet) to half-precision
652
+ # as these weights are only used for inference, keeping weights in full precision is not required.
653
+ weight_dtype = torch.float32
654
+ if accelerator.mixed_precision == "fp16":
655
+ weight_dtype = torch.float16
656
+ elif accelerator.mixed_precision == "bf16":
657
+ weight_dtype = torch.bfloat16
658
+
659
+ # Move unet, vae and text_encoder to device and cast to weight_dtype
660
+ # The VAE is in float32 to avoid NaN losses.
661
+ unet.to(accelerator.device, dtype=weight_dtype)
662
+
663
+ if args.pretrained_vae_model_name_or_path is None:
664
+ vae.to(accelerator.device, dtype=torch.float32)
665
+ else:
666
+ vae.to(accelerator.device, dtype=weight_dtype)
667
+ text_encoder_one.to(accelerator.device, dtype=weight_dtype)
668
+ text_encoder_two.to(accelerator.device, dtype=weight_dtype)
669
+
670
+ if args.enable_npu_flash_attention:
671
+ if is_torch_npu_available():
672
+ logger.info("npu flash attention enabled.")
673
+ unet.enable_npu_flash_attention()
674
+ else:
675
+ raise ValueError("npu flash attention requires torch_npu extensions and is supported only on npu devices.")
676
+
677
+ if args.enable_xformers_memory_efficient_attention:
678
+ if is_xformers_available():
679
+ import xformers
680
+
681
+ xformers_version = version.parse(xformers.__version__)
682
+ if xformers_version == version.parse("0.0.16"):
683
+ logger.warning(
684
+ "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
685
+ )
686
+ unet.enable_xformers_memory_efficient_attention()
687
+ else:
688
+ raise ValueError("xformers is not available. Make sure it is installed correctly")
689
+
690
+ # now we will add new LoRA weights to the attention layers
691
+ # Set correct lora layers
692
+ unet_lora_config = LoraConfig(
693
+ r=args.rank,
694
+ lora_alpha=args.rank,
695
+ init_lora_weights="gaussian",
696
+ target_modules=["to_k", "to_q", "to_v", "to_out.0"],
697
+ )
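+ # Note: with lora_alpha equal to r, the effective LoRA scaling factor (lora_alpha / r) is 1.0.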
698
+
699
+ unet.add_adapter(unet_lora_config)
700
+
701
+ # The text encoder comes from 🤗 transformers, we will also attach adapters to it.
702
+ if args.train_text_encoder:
703
+ # ensure that dtype is float32, even if rest of the model that isn't trained is loaded in fp16
704
+ text_lora_config = LoraConfig(
705
+ r=args.rank,
706
+ lora_alpha=args.rank,
707
+ init_lora_weights="gaussian",
708
+ target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
709
+ )
710
+ text_encoder_one.add_adapter(text_lora_config)
711
+ text_encoder_two.add_adapter(text_lora_config)
712
+
713
+ def unwrap_model(model):
714
+ model = accelerator.unwrap_model(model)
715
+ model = model._orig_mod if is_compiled_module(model) else model
716
+ return model
717
+
718
+ # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
719
+ def save_model_hook(models, weights, output_dir):
720
+ if accelerator.is_main_process:
721
+ # There are only two options here: either just the unet attention LoRA layers,
+ # or both the unet and the text encoder attention LoRA layers.
723
+ unet_lora_layers_to_save = None
724
+ text_encoder_one_lora_layers_to_save = None
725
+ text_encoder_two_lora_layers_to_save = None
726
+
727
+ for model in models:
728
+ if isinstance(unwrap_model(model), type(unwrap_model(unet))):
729
+ unet_lora_layers_to_save = convert_state_dict_to_diffusers(get_peft_model_state_dict(model))
730
+ elif isinstance(unwrap_model(model), type(unwrap_model(text_encoder_one))):
731
+ text_encoder_one_lora_layers_to_save = convert_state_dict_to_diffusers(
732
+ get_peft_model_state_dict(model)
733
+ )
734
+ elif isinstance(unwrap_model(model), type(unwrap_model(text_encoder_two))):
735
+ text_encoder_two_lora_layers_to_save = convert_state_dict_to_diffusers(
736
+ get_peft_model_state_dict(model)
737
+ )
738
+ else:
739
+ raise ValueError(f"unexpected save model: {model.__class__}")
740
+
741
+ # make sure to pop weight so that corresponding model is not saved again
742
+ if weights:
743
+ weights.pop()
744
+
745
+ StableDiffusionXLPipeline.save_lora_weights(
746
+ output_dir,
747
+ unet_lora_layers=unet_lora_layers_to_save,
748
+ text_encoder_lora_layers=text_encoder_one_lora_layers_to_save,
749
+ text_encoder_2_lora_layers=text_encoder_two_lora_layers_to_save,
750
+ )
751
+
752
+ def load_model_hook(models, input_dir):
753
+ unet_ = None
754
+ text_encoder_one_ = None
755
+ text_encoder_two_ = None
756
+
757
+ while len(models) > 0:
758
+ model = models.pop()
759
+
760
+ if isinstance(model, type(unwrap_model(unet))):
761
+ unet_ = model
762
+ elif isinstance(model, type(unwrap_model(text_encoder_one))):
763
+ text_encoder_one_ = model
764
+ elif isinstance(model, type(unwrap_model(text_encoder_two))):
765
+ text_encoder_two_ = model
766
+ else:
767
+ raise ValueError(f"unexpected save model: {model.__class__}")
768
+
769
+ lora_state_dict, _ = StableDiffusionLoraLoaderMixin.lora_state_dict(input_dir)
770
+ unet_state_dict = {f"{k.replace('unet.', '')}": v for k, v in lora_state_dict.items() if k.startswith("unet.")}
771
+ unet_state_dict = convert_unet_state_dict_to_peft(unet_state_dict)
772
+ incompatible_keys = set_peft_model_state_dict(unet_, unet_state_dict, adapter_name="default")
773
+ if incompatible_keys is not None:
774
+ # check only for unexpected keys
775
+ unexpected_keys = getattr(incompatible_keys, "unexpected_keys", None)
776
+ if unexpected_keys:
777
+ logger.warning(
778
+ f"Loading adapter weights from state_dict led to unexpected keys not found in the model: "
779
+ f" {unexpected_keys}. "
780
+ )
781
+
782
+ if args.train_text_encoder:
783
+ _set_state_dict_into_text_encoder(lora_state_dict, prefix="text_encoder.", text_encoder=text_encoder_one_)
784
+
785
+ _set_state_dict_into_text_encoder(
786
+ lora_state_dict, prefix="text_encoder_2.", text_encoder=text_encoder_two_
787
+ )
788
+
789
+ # Make sure the trainable params are in float32. This is again needed since the base models
790
+ # are in `weight_dtype`. More details:
791
+ # https://github.com/huggingface/diffusers/pull/6514#discussion_r1449796804
792
+ if args.mixed_precision == "fp16":
793
+ models = [unet_]
794
+ if args.train_text_encoder:
795
+ models.extend([text_encoder_one_, text_encoder_two_])
796
+ cast_training_params(models, dtype=torch.float32)
797
+
798
+ accelerator.register_save_state_pre_hook(save_model_hook)
799
+ accelerator.register_load_state_pre_hook(load_model_hook)
800
+
801
+ if args.gradient_checkpointing:
802
+ unet.enable_gradient_checkpointing()
803
+ if args.train_text_encoder:
804
+ text_encoder_one.gradient_checkpointing_enable()
805
+ text_encoder_two.gradient_checkpointing_enable()
806
+
807
+ # Enable TF32 for faster training on Ampere GPUs,
808
+ # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
809
+ if args.allow_tf32:
810
+ torch.backends.cuda.matmul.allow_tf32 = True
811
+
812
+ if args.scale_lr:
813
+ args.learning_rate = (
814
+ args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
815
+ )
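+ # (e.g. a base lr of 1e-4 with 4 accumulation steps, batch size 16 and 2 processes becomes 1.28e-2)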
816
+
817
+ # Make sure the trainable params are in float32.
818
+ if args.mixed_precision == "fp16":
819
+ models = [unet]
820
+ if args.train_text_encoder:
821
+ models.extend([text_encoder_one, text_encoder_two])
822
+ cast_training_params(models, dtype=torch.float32)
823
+
824
+ # Use 8-bit Adam for lower memory usage or to fine-tune the model in 16GB GPUs
825
+ if args.use_8bit_adam:
826
+ try:
827
+ import bitsandbytes as bnb
828
+ except ImportError:
829
+ raise ImportError(
830
+ "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
831
+ )
832
+
833
+ optimizer_class = bnb.optim.AdamW8bit
834
+ else:
835
+ optimizer_class = torch.optim.AdamW
836
+
837
+ # Optimizer creation
838
+ params_to_optimize = list(filter(lambda p: p.requires_grad, unet.parameters()))
839
+ if args.train_text_encoder:
840
+ params_to_optimize = (
841
+ params_to_optimize
842
+ + list(filter(lambda p: p.requires_grad, text_encoder_one.parameters()))
843
+ + list(filter(lambda p: p.requires_grad, text_encoder_two.parameters()))
844
+ )
845
+ optimizer = optimizer_class(
846
+ params_to_optimize,
847
+ lr=args.learning_rate,
848
+ betas=(args.adam_beta1, args.adam_beta2),
849
+ weight_decay=args.adam_weight_decay,
850
+ eps=args.adam_epsilon,
851
+ )
852
+
853
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
854
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
855
+
856
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
857
+ # download the dataset.
858
+ if args.dataset_name is not None:
859
+ # Downloading and loading a dataset from the hub.
860
+ dataset = load_dataset(
861
+ args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, data_dir=args.train_data_dir
862
+ )
863
+ else:
864
+ data_files = {}
865
+ if args.train_data_dir is not None:
866
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
867
+ dataset = load_dataset(
868
+ "imagefolder",
869
+ data_files=data_files,
870
+ cache_dir=args.cache_dir,
871
+ )
872
+ # See more about loading custom images at
873
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
874
+
875
+ # Preprocessing the datasets.
876
+ # We need to tokenize inputs and targets.
877
+ column_names = dataset["train"].column_names
878
+
879
+ # 6. Get the column names for input/target.
880
+ dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None)
881
+ if args.image_column is None:
882
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
883
+ else:
884
+ image_column = args.image_column
885
+ if image_column not in column_names:
886
+ raise ValueError(
887
+ f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
888
+ )
889
+ if args.caption_column is None:
890
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
891
+ else:
892
+ caption_column = args.caption_column
893
+ if caption_column not in column_names:
894
+ raise ValueError(
895
+ f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
896
+ )
897
+
898
+ # Preprocessing the datasets.
899
+ # We need to tokenize input captions and transform the images.
900
+ def tokenize_captions(examples, is_train=True):
901
+ captions = []
902
+ for caption in examples[caption_column]:
903
+ if isinstance(caption, str):
904
+ captions.append(caption)
905
+ elif isinstance(caption, (list, np.ndarray)):
906
+ # take a random caption if there are multiple
907
+ captions.append(random.choice(caption) if is_train else caption[0])
908
+ else:
909
+ raise ValueError(
910
+ f"Caption column `{caption_column}` should contain either strings or lists of strings."
911
+ )
912
+ tokens_one = tokenize_prompt(tokenizer_one, captions)
913
+ tokens_two = tokenize_prompt(tokenizer_two, captions)
914
+ return tokens_one, tokens_two
915
+
916
+ # Preprocessing the datasets.
917
+ train_resize = transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR)
918
+ train_crop = transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution)
919
+ train_flip = transforms.RandomHorizontalFlip(p=1.0)
920
+ train_transforms = transforms.Compose(
921
+ [
922
+ transforms.ToTensor(),
923
+ transforms.Normalize([0.5], [0.5]),
924
+ ]
925
+ )
926
+
927
+ def preprocess_train(examples):
928
+ images = [image.convert("RGB") for image in examples[image_column]]
929
+ # image aug
930
+ original_sizes = []
931
+ all_images = []
932
+ crop_top_lefts = []
933
+ for image in images:
934
+ original_sizes.append((image.height, image.width))
935
+ image = train_resize(image)
936
+ if args.random_flip and random.random() < 0.5:
937
+ # flip
938
+ image = train_flip(image)
939
+ if args.center_crop:
940
+ y1 = max(0, int(round((image.height - args.resolution) / 2.0)))
941
+ x1 = max(0, int(round((image.width - args.resolution) / 2.0)))
942
+ image = train_crop(image)
943
+ else:
944
+ y1, x1, h, w = train_crop.get_params(image, (args.resolution, args.resolution))
945
+ image = crop(image, y1, x1, h, w)
946
+ crop_top_left = (y1, x1)
947
+ crop_top_lefts.append(crop_top_left)
948
+ image = train_transforms(image)
949
+ all_images.append(image)
950
+
951
+ examples["original_sizes"] = original_sizes
952
+ examples["crop_top_lefts"] = crop_top_lefts
953
+ examples["pixel_values"] = all_images
954
+ tokens_one, tokens_two = tokenize_captions(examples)
955
+ examples["input_ids_one"] = tokens_one
956
+ examples["input_ids_two"] = tokens_two
957
+ if args.debug_loss:
958
+ fnames = [os.path.basename(image.filename) for image in examples[image_column] if image.filename]
959
+ if fnames:
960
+ examples["filenames"] = fnames
961
+ return examples
962
+
963
+ with accelerator.main_process_first():
964
+ if args.max_train_samples is not None:
965
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
966
+ # Set the training transforms
967
+ train_dataset = dataset["train"].with_transform(preprocess_train, output_all_columns=True)
968
+
969
+ def collate_fn(examples):
970
+ pixel_values = torch.stack([example["pixel_values"] for example in examples])
971
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
972
+ original_sizes = [example["original_sizes"] for example in examples]
973
+ crop_top_lefts = [example["crop_top_lefts"] for example in examples]
974
+ input_ids_one = torch.stack([example["input_ids_one"] for example in examples])
975
+ input_ids_two = torch.stack([example["input_ids_two"] for example in examples])
976
+ result = {
977
+ "pixel_values": pixel_values,
978
+ "input_ids_one": input_ids_one,
979
+ "input_ids_two": input_ids_two,
980
+ "original_sizes": original_sizes,
981
+ "crop_top_lefts": crop_top_lefts,
982
+ }
983
+
984
+ filenames = [example["filenames"] for example in examples if "filenames" in example]
985
+ if filenames:
986
+ result["filenames"] = filenames
987
+ return result
988
+
989
+ # DataLoaders creation:
990
+ train_dataloader = torch.utils.data.DataLoader(
991
+ train_dataset,
992
+ shuffle=True,
993
+ collate_fn=collate_fn,
994
+ batch_size=args.train_batch_size,
995
+ num_workers=args.dataloader_num_workers,
996
+ )
997
+
998
+ # Scheduler and math around the number of training steps.
999
+ overrode_max_train_steps = False
1000
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
1001
+ if args.max_train_steps is None:
1002
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
1003
+ overrode_max_train_steps = True
1004
+
1005
+ lr_scheduler = get_scheduler(
1006
+ args.lr_scheduler,
1007
+ optimizer=optimizer,
1008
+ num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
1009
+ num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
1010
+ )
1011
+
1012
+ # Prepare everything with our `accelerator`.
1013
+ if args.train_text_encoder:
1014
+ unet, text_encoder_one, text_encoder_two, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
1015
+ unet, text_encoder_one, text_encoder_two, optimizer, train_dataloader, lr_scheduler
1016
+ )
1017
+ else:
1018
+ unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
1019
+ unet, optimizer, train_dataloader, lr_scheduler
1020
+ )
1021
+
1022
+ # We need to recalculate our total training steps as the size of the training dataloader may have changed.
1023
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
1024
+ if overrode_max_train_steps:
1025
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
1026
+ # Afterwards we recalculate our number of training epochs
1027
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
1028
+
1029
+ # We need to initialize the trackers we use, and also store our configuration.
1030
+ # The trackers initialize automatically on the main process.
1031
+ if accelerator.is_main_process:
1032
+ accelerator.init_trackers("text2image-fine-tune", config=vars(args))
1033
+
1034
+ # Train!
1035
+ total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
1036
+
1037
+ logger.info("***** Running training *****")
1038
+ logger.info(f" Num examples = {len(train_dataset)}")
1039
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
1040
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
1041
+ logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
1042
+ logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
1043
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
1044
+ global_step = 0
1045
+ first_epoch = 0
1046
+
1047
+ # Potentially load in the weights and states from a previous save
1048
+ if args.resume_from_checkpoint:
1049
+ if args.resume_from_checkpoint != "latest":
1050
+ path = os.path.basename(args.resume_from_checkpoint)
1051
+ else:
1052
+ # Get the most recent checkpoint
1053
+ dirs = os.listdir(args.output_dir)
1054
+ dirs = [d for d in dirs if d.startswith("checkpoint")]
1055
+ dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
1056
+ path = dirs[-1] if len(dirs) > 0 else None
1057
+
1058
+ if path is None:
1059
+ accelerator.print(
1060
+ f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
1061
+ )
1062
+ args.resume_from_checkpoint = None
1063
+ initial_global_step = 0
1064
+ else:
1065
+ accelerator.print(f"Resuming from checkpoint {path}")
1066
+ accelerator.load_state(os.path.join(args.output_dir, path))
1067
+ global_step = int(path.split("-")[1])
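+ # checkpoints are saved as "checkpoint-<global_step>", so the step count is recovered from the directory name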
1068
+
1069
+ initial_global_step = global_step
1070
+ first_epoch = global_step // num_update_steps_per_epoch
1071
+
1072
+ else:
1073
+ initial_global_step = 0
1074
+
1075
+ progress_bar = tqdm(
1076
+ range(0, args.max_train_steps),
1077
+ initial=initial_global_step,
1078
+ desc="Steps",
1079
+ # Only show the progress bar once on each machine.
1080
+ disable=not accelerator.is_local_main_process,
1081
+ )
1082
+
1083
+ for epoch in range(first_epoch, args.num_train_epochs):
1084
+ unet.train()
1085
+ if args.train_text_encoder:
1086
+ text_encoder_one.train()
1087
+ text_encoder_two.train()
1088
+ train_loss = 0.0
1089
+ for step, batch in enumerate(train_dataloader):
1090
+ with accelerator.accumulate(unet):
1091
+ # Convert images to latent space
1092
+ if args.pretrained_vae_model_name_or_path is not None:
1093
+ pixel_values = batch["pixel_values"].to(dtype=weight_dtype)
1094
+ else:
1095
+ pixel_values = batch["pixel_values"]
1096
+
1097
+ model_input = vae.encode(pixel_values).latent_dist.sample()
1098
+ model_input = model_input * vae.config.scaling_factor
1099
+ if args.pretrained_vae_model_name_or_path is None:
1100
+ model_input = model_input.to(weight_dtype)
1101
+
1102
+ # Sample noise that we'll add to the latents
1103
+ noise = torch.randn_like(model_input)
1104
+ if args.noise_offset:
1105
+ # https://www.crosslabs.org//blog/diffusion-with-offset-noise
1106
+ noise += args.noise_offset * torch.randn(
1107
+ (model_input.shape[0], model_input.shape[1], 1, 1), device=model_input.device
1108
+ )
1109
+
1110
+ bsz = model_input.shape[0]
1111
+ # Sample a random timestep for each image
1112
+ timesteps = torch.randint(
1113
+ 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=model_input.device
1114
+ )
1115
+ timesteps = timesteps.long()
1116
+
1117
+ # Add noise to the model input according to the noise magnitude at each timestep
1118
+ # (this is the forward diffusion process)
1119
+ noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps)
1120
+
1121
+ # time ids
1122
+ def compute_time_ids(original_size, crops_coords_top_left):
1123
+ # Adapted from pipeline.StableDiffusionXLPipeline._get_add_time_ids
1124
+ target_size = (args.resolution, args.resolution)
1125
+ add_time_ids = list(original_size + crops_coords_top_left + target_size)
1126
+ add_time_ids = torch.tensor([add_time_ids])
1127
+ add_time_ids = add_time_ids.to(accelerator.device, dtype=weight_dtype)
1128
+ return add_time_ids
1129
+
1130
+ add_time_ids = torch.cat(
1131
+ [compute_time_ids(s, c) for s, c in zip(batch["original_sizes"], batch["crop_top_lefts"])]
1132
+ )
1133
+
1134
+ # Predict the noise residual
1135
+ unet_added_conditions = {"time_ids": add_time_ids}
1136
+ prompt_embeds, pooled_prompt_embeds = encode_prompt(
1137
+ text_encoders=[text_encoder_one, text_encoder_two],
1138
+ tokenizers=None,
1139
+ prompt=None,
1140
+ text_input_ids_list=[batch["input_ids_one"], batch["input_ids_two"]],
1141
+ )
1142
+ unet_added_conditions.update({"text_embeds": pooled_prompt_embeds})
1143
+ model_pred = unet(
1144
+ noisy_model_input,
1145
+ timesteps,
1146
+ prompt_embeds,
1147
+ added_cond_kwargs=unet_added_conditions,
1148
+ return_dict=False,
1149
+ )[0]
1150
+
1151
+ # Get the target for loss depending on the prediction type
1152
+ if args.prediction_type is not None:
1153
+ # set prediction_type of scheduler if defined
1154
+ noise_scheduler.register_to_config(prediction_type=args.prediction_type)
1155
+
1156
+ if noise_scheduler.config.prediction_type == "epsilon":
1157
+ target = noise
1158
+ elif noise_scheduler.config.prediction_type == "v_prediction":
1159
+ target = noise_scheduler.get_velocity(model_input, noise, timesteps)
1160
+ else:
1161
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
1162
+
1163
+ if args.snr_gamma is None:
1164
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
1165
+ else:
1166
+ # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556.
1167
+ # Since we predict the noise instead of x_0, the original formulation is slightly changed.
1168
+ # This is discussed in Section 4.2 of the same paper.
1169
+ snr = compute_snr(noise_scheduler, timesteps)
1170
+ mse_loss_weights = torch.stack([snr, args.snr_gamma * torch.ones_like(timesteps)], dim=1).min(
1171
+ dim=1
1172
+ )[0]
1173
+ if noise_scheduler.config.prediction_type == "epsilon":
1174
+ mse_loss_weights = mse_loss_weights / snr
1175
+ elif noise_scheduler.config.prediction_type == "v_prediction":
1176
+ mse_loss_weights = mse_loss_weights / (snr + 1)
1177
+
1178
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
1179
+ loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
1180
+ loss = loss.mean()
1181
+ if args.debug_loss and "filenames" in batch:
1182
+ for fname in batch["filenames"]:
1183
+ accelerator.log({"loss_for_" + fname: loss}, step=global_step)
1184
+ # Gather the losses across all processes for logging (if we use distributed training).
1185
+ avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
1186
+ train_loss += avg_loss.item() / args.gradient_accumulation_steps
1187
+
1188
+ # Backpropagate
1189
+ accelerator.backward(loss)
1190
+ if accelerator.sync_gradients:
1191
+ accelerator.clip_grad_norm_(params_to_optimize, args.max_grad_norm)
1192
+ optimizer.step()
1193
+ lr_scheduler.step()
1194
+ optimizer.zero_grad()
1195
+
1196
+ # Checks if the accelerator has performed an optimization step behind the scenes
1197
+ if accelerator.sync_gradients:
1198
+ progress_bar.update(1)
1199
+ global_step += 1
1200
+ accelerator.log({"train_loss": train_loss}, step=global_step)
1201
+ train_loss = 0.0
1202
+
1203
+ # DeepSpeed requires saving weights on every device; saving weights only on the main process would cause issues.
1204
+ if accelerator.distributed_type == DistributedType.DEEPSPEED or accelerator.is_main_process:
1205
+ if global_step % args.checkpointing_steps == 0:
1206
+ # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
1207
+ if args.checkpoints_total_limit is not None:
1208
+ checkpoints = os.listdir(args.output_dir)
1209
+ checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
1210
+ checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
1211
+
1212
+ # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
1213
+ if len(checkpoints) >= args.checkpoints_total_limit:
1214
+ num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
1215
+ removing_checkpoints = checkpoints[0:num_to_remove]
1216
+
1217
+ logger.info(
1218
+ f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
1219
+ )
1220
+ logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
1221
+
1222
+ for removing_checkpoint in removing_checkpoints:
1223
+ removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
1224
+ shutil.rmtree(removing_checkpoint)
1225
+
1226
+ save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
1227
+ accelerator.save_state(save_path)
1228
+ logger.info(f"Saved state to {save_path}")
1229
+
1230
+ logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
1231
+ progress_bar.set_postfix(**logs)
1232
+
1233
+ if global_step >= args.max_train_steps:
1234
+ break
1235
+
1236
+ if accelerator.is_main_process:
1237
+ if args.validation_prompt is not None and epoch % args.validation_epochs == 0:
1238
+ # create pipeline
1239
+ pipeline = StableDiffusionXLPipeline.from_pretrained(
1240
+ args.pretrained_model_name_or_path,
1241
+ vae=vae,
1242
+ text_encoder=unwrap_model(text_encoder_one),
1243
+ text_encoder_2=unwrap_model(text_encoder_two),
1244
+ unet=unwrap_model(unet),
1245
+ revision=args.revision,
1246
+ variant=args.variant,
1247
+ torch_dtype=weight_dtype,
1248
+ )
1249
+
1250
+ images = log_validation(pipeline, args, accelerator, epoch)
1251
+
1252
+ del pipeline
1253
+ torch.cuda.empty_cache()
1254
+
1255
+ # Save the lora layers
1256
+ accelerator.wait_for_everyone()
1257
+ if accelerator.is_main_process:
1258
+ unet = unwrap_model(unet)
1259
+ unet_lora_state_dict = convert_state_dict_to_diffusers(get_peft_model_state_dict(unet))
1260
+
1261
+ if args.train_text_encoder:
1262
+ text_encoder_one = unwrap_model(text_encoder_one)
1263
+ text_encoder_two = unwrap_model(text_encoder_two)
1264
+
1265
+ text_encoder_lora_layers = convert_state_dict_to_diffusers(get_peft_model_state_dict(text_encoder_one))
1266
+ text_encoder_2_lora_layers = convert_state_dict_to_diffusers(get_peft_model_state_dict(text_encoder_two))
1267
+ else:
1268
+ text_encoder_lora_layers = None
1269
+ text_encoder_2_lora_layers = None
1270
+
1271
+ StableDiffusionXLPipeline.save_lora_weights(
1272
+ save_directory=args.output_dir,
1273
+ unet_lora_layers=unet_lora_state_dict,
1274
+ text_encoder_lora_layers=text_encoder_lora_layers,
1275
+ text_encoder_2_lora_layers=text_encoder_2_lora_layers,
1276
+ )
1277
+
1278
+ del unet
1279
+ del text_encoder_one
1280
+ del text_encoder_two
1281
+ del text_encoder_lora_layers
1282
+ del text_encoder_2_lora_layers
1283
+ torch.cuda.empty_cache()
1284
+
1285
+ # Final inference
1286
+ # Make sure vae.dtype is consistent with the unet.dtype
1287
+ if args.mixed_precision == "fp16":
1288
+ vae.to(weight_dtype)
1289
+ # Load previous pipeline
1290
+ pipeline = StableDiffusionXLPipeline.from_pretrained(
1291
+ args.pretrained_model_name_or_path,
1292
+ vae=vae,
1293
+ revision=args.revision,
1294
+ variant=args.variant,
1295
+ torch_dtype=weight_dtype,
1296
+ )
1297
+
1298
+ # Load the LoRA weights trained above
1299
+ pipeline.load_lora_weights(args.output_dir)
1300
+
1301
+ # run inference
1302
+ if args.validation_prompt and args.num_validation_images > 0:
1303
+ images = log_validation(pipeline, args, accelerator, epoch, is_final_validation=True)
1304
+
1305
+ if args.push_to_hub:
1306
+ save_model_card(
1307
+ repo_id,
1308
+ images=images,
1309
+ base_model=args.pretrained_model_name_or_path,
1310
+ dataset_name=args.dataset_name,
1311
+ train_text_encoder=args.train_text_encoder,
1312
+ repo_folder=args.output_dir,
1313
+ vae_path=args.pretrained_vae_model_name_or_path,
1314
+ )
1315
+ upload_folder(
1316
+ repo_id=repo_id,
1317
+ folder_path=args.output_dir,
1318
+ commit_message="End of training",
1319
+ ignore_patterns=["step_*", "epoch_*"],
1320
+ )
1321
+
1322
+ accelerator.end_training()
1323
+
1324
+
1325
+ if __name__ == "__main__":
1326
+ args = parse_args()
1327
+ main(args)
train_text_to_image_sdxl.py ADDED
@@ -0,0 +1,1358 @@
1
+ #!/usr/bin/env python
2
+ # coding=utf-8
3
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """Fine-tuning script for Stable Diffusion XL for text2image."""
17
+
18
+ import argparse
19
+ import functools
20
+ import gc
21
+ import logging
22
+ import math
23
+ import os
24
+ import random
25
+ import shutil
26
+ from contextlib import nullcontext
27
+ from pathlib import Path
28
+
29
+ import accelerate
30
+ import datasets
31
+ import numpy as np
32
+ import torch
33
+ import torch.nn.functional as F
34
+ import torch.utils.checkpoint
35
+ import transformers
36
+ from accelerate import Accelerator
37
+ from accelerate.logging import get_logger
38
+ from accelerate.utils import DistributedType, ProjectConfiguration, set_seed
39
+ from datasets import concatenate_datasets, load_dataset
40
+ from huggingface_hub import create_repo, upload_folder
41
+ from packaging import version
42
+ from torchvision import transforms
43
+ from torchvision.transforms.functional import crop
44
+ from tqdm.auto import tqdm
45
+ from transformers import AutoTokenizer, PretrainedConfig
46
+
47
+ import diffusers
48
+ from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionXLPipeline, UNet2DConditionModel
49
+ from diffusers.optimization import get_scheduler
50
+ from diffusers.training_utils import EMAModel, compute_snr
51
+ from diffusers.utils import check_min_version, is_wandb_available
52
+ from diffusers.utils.hub_utils import load_or_create_model_card, populate_model_card
53
+ from diffusers.utils.import_utils import is_torch_npu_available, is_xformers_available
54
+ from diffusers.utils.torch_utils import is_compiled_module
55
+
56
+
57
+ # Will error if the minimal version of diffusers is not installed. Remove at your own risk.
58
+ check_min_version("0.33.0.dev0")
59
+
60
+ logger = get_logger(__name__)
61
+ if is_torch_npu_available():
62
+ import torch_npu
63
+
64
+ torch.npu.config.allow_internal_format = False
65
+
66
+ DATASET_NAME_MAPPING = {
67
+ "lambdalabs/naruto-blip-captions": ("image", "text"),
68
+ }
69
+
70
+
71
+ def save_model_card(
72
+ repo_id: str,
73
+ images: list = None,
74
+ validation_prompt: str = None,
75
+ base_model: str = None,
76
+ dataset_name: str = None,
77
+ repo_folder: str = None,
78
+ vae_path: str = None,
79
+ ):
80
+ img_str = ""
81
+ if images is not None:
82
+ for i, image in enumerate(images):
83
+ image.save(os.path.join(repo_folder, f"image_{i}.png"))
84
+ img_str += f"![img_{i}](./image_{i}.png)\n"
85
+
86
+ model_description = f"""
87
+ # Text-to-image finetuning - {repo_id}
88
+
89
+ This pipeline was finetuned from **{base_model}** on the **{dataset_name}** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: {validation_prompt}. \n
90
+ {img_str}
91
+
92
+ Special VAE used for training: {vae_path}.
93
+ """
94
+
95
+ model_card = load_or_create_model_card(
96
+ repo_id_or_path=repo_id,
97
+ from_training=True,
98
+ license="creativeml-openrail-m",
99
+ base_model=base_model,
100
+ model_description=model_description,
101
+ inference=True,
102
+ )
103
+
104
+ tags = [
105
+ "stable-diffusion-xl",
106
+ "stable-diffusion-xl-diffusers",
107
+ "text-to-image",
108
+ "diffusers-training",
109
+ "diffusers",
110
+ ]
111
+ model_card = populate_model_card(model_card, tags=tags)
112
+
113
+ model_card.save(os.path.join(repo_folder, "README.md"))
114
+
115
+
116
+ def import_model_class_from_model_name_or_path(
117
+ pretrained_model_name_or_path: str, revision: str, subfolder: str = "text_encoder"
118
+ ):
119
+ text_encoder_config = PretrainedConfig.from_pretrained(
120
+ pretrained_model_name_or_path, subfolder=subfolder, revision=revision
121
+ )
122
+ model_class = text_encoder_config.architectures[0]
123
+
124
+ if model_class == "CLIPTextModel":
125
+ from transformers import CLIPTextModel
126
+
127
+ return CLIPTextModel
128
+ elif model_class == "CLIPTextModelWithProjection":
129
+ from transformers import CLIPTextModelWithProjection
130
+
131
+ return CLIPTextModelWithProjection
132
+ else:
133
+ raise ValueError(f"{model_class} is not supported.")
134
+
135
+
136
+ def parse_args(input_args=None):
137
+ parser = argparse.ArgumentParser(description="Simple example of a training script.")
138
+ parser.add_argument(
139
+ "--pretrained_model_name_or_path",
140
+ type=str,
141
+ default=None,
142
+ required=True,
143
+ help="Path to pretrained model or model identifier from huggingface.co/models.",
144
+ )
145
+ parser.add_argument(
146
+ "--pretrained_vae_model_name_or_path",
147
+ type=str,
148
+ default=None,
149
+ help="Path to pretrained VAE model with better numerical stability. More details: https://github.com/huggingface/diffusers/pull/4038.",
150
+ )
151
+ parser.add_argument(
152
+ "--revision",
153
+ type=str,
154
+ default=None,
155
+ required=False,
156
+ help="Revision of pretrained model identifier from huggingface.co/models.",
157
+ )
158
+ parser.add_argument(
159
+ "--variant",
160
+ type=str,
161
+ default=None,
162
+ help="Variant of the model files of the pretrained model identifier from huggingface.co/models, e.g. fp16",
163
+ )
164
+ parser.add_argument(
165
+ "--dataset_name",
166
+ type=str,
167
+ default=None,
168
+ help=(
169
+ "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private,"
170
+ " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem,"
171
+ " or to a folder containing files that 🤗 Datasets can understand."
172
+ ),
173
+ )
174
+ parser.add_argument(
175
+ "--dataset_config_name",
176
+ type=str,
177
+ default=None,
178
+ help="The config of the Dataset, leave as None if there's only one config.",
179
+ )
180
+ parser.add_argument(
181
+ "--train_data_dir",
182
+ type=str,
183
+ default=None,
184
+ help=(
185
+ "A folder containing the training data. Folder contents must follow the structure described in"
186
+ " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file"
187
+ " must exist to provide the captions for the images. Ignored if `dataset_name` is specified."
188
+ ),
189
+ )
190
+ parser.add_argument(
191
+ "--image_column", type=str, default="image", help="The column of the dataset containing an image."
192
+ )
193
+ parser.add_argument(
194
+ "--caption_column",
195
+ type=str,
196
+ default="text",
197
+ help="The column of the dataset containing a caption or a list of captions.",
198
+ )
199
+ parser.add_argument(
200
+ "--validation_prompt",
201
+ type=str,
202
+ default=None,
203
+ help="A prompt that is used during validation to verify that the model is learning.",
204
+ )
205
+ parser.add_argument(
206
+ "--num_validation_images",
207
+ type=int,
208
+ default=4,
209
+ help="Number of images that should be generated during validation with `validation_prompt`.",
210
+ )
211
+ parser.add_argument(
212
+ "--validation_epochs",
213
+ type=int,
214
+ default=1,
215
+ help=(
216
+ "Run fine-tuning validation every X epochs. The validation process consists of running the prompt"
217
+ " `args.validation_prompt` `args.num_validation_images` times."
218
+ ),
219
+ )
220
+ parser.add_argument(
221
+ "--max_train_samples",
222
+ type=int,
223
+ default=None,
224
+ help=(
225
+ "For debugging purposes or quicker training, truncate the number of training examples to this "
226
+ "value if set."
227
+ ),
228
+ )
229
+ parser.add_argument(
230
+ "--proportion_empty_prompts",
231
+ type=float,
232
+ default=0,
233
+ help="Proportion of image prompts to be replaced with empty strings. Defaults to 0 (no prompt replacement).",
234
+ )
235
+ parser.add_argument(
236
+ "--output_dir",
237
+ type=str,
238
+ default="sdxl-model-finetuned",
239
+ help="The output directory where the model predictions and checkpoints will be written.",
240
+ )
241
+ parser.add_argument(
242
+ "--cache_dir",
243
+ type=str,
244
+ default=None,
245
+ help="The directory where the downloaded models and datasets will be stored.",
246
+ )
247
+ parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
248
+ parser.add_argument(
249
+ "--resolution",
250
+ type=int,
251
+ default=1024,
252
+ help=(
253
+ "The resolution for input images, all the images in the train/validation dataset will be resized to this"
254
+ " resolution"
255
+ ),
256
+ )
257
+ parser.add_argument(
258
+ "--center_crop",
259
+ default=False,
260
+ action="store_true",
261
+ help=(
262
+ "Whether to center crop the input images to the resolution. If not set, the images will be randomly"
263
+ " cropped. The images will be resized to the resolution first before cropping."
264
+ ),
265
+ )
266
+ parser.add_argument(
267
+ "--random_flip",
268
+ action="store_true",
269
+ help="whether to randomly flip images horizontally",
270
+ )
271
+ parser.add_argument(
272
+ "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
273
+ )
274
+ parser.add_argument("--num_train_epochs", type=int, default=100)
275
+ parser.add_argument(
276
+ "--max_train_steps",
277
+ type=int,
278
+ default=None,
279
+ help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
280
+ )
281
+ parser.add_argument(
282
+ "--checkpointing_steps",
283
+ type=int,
284
+ default=500,
285
+ help=(
286
+ "Save a checkpoint of the training state every X updates. These checkpoints can be used both as final"
287
+ " checkpoints in case they are better than the last checkpoint, and are also suitable for resuming"
288
+ " training using `--resume_from_checkpoint`."
289
+ ),
290
+ )
291
+ parser.add_argument(
292
+ "--checkpoints_total_limit",
293
+ type=int,
294
+ default=None,
295
+ help=("Max number of checkpoints to store."),
296
+ )
297
+ parser.add_argument(
298
+ "--resume_from_checkpoint",
299
+ type=str,
300
+ default=None,
301
+ help=(
302
+ "Whether training should be resumed from a previous checkpoint. Use a path saved by"
303
+ ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.'
304
+ ),
305
+ )
306
+ parser.add_argument(
307
+ "--gradient_accumulation_steps",
308
+ type=int,
309
+ default=1,
310
+ help="Number of updates steps to accumulate before performing a backward/update pass.",
311
+ )
312
+ parser.add_argument(
313
+ "--gradient_checkpointing",
314
+ action="store_true",
315
+ help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.",
316
+ )
317
+ parser.add_argument(
318
+ "--learning_rate",
319
+ type=float,
320
+ default=1e-4,
321
+ help="Initial learning rate (after the potential warmup period) to use.",
322
+ )
323
+ parser.add_argument(
324
+ "--scale_lr",
325
+ action="store_true",
326
+ default=False,
327
+ help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
328
+ )
329
+ parser.add_argument(
330
+ "--lr_scheduler",
331
+ type=str,
332
+ default="constant",
333
+ help=(
334
+ 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
335
+ ' "constant", "constant_with_warmup"]'
336
+ ),
337
+ )
338
+ parser.add_argument(
339
+ "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
340
+ )
341
+ parser.add_argument(
342
+ "--timestep_bias_strategy",
343
+ type=str,
344
+ default="none",
345
+ choices=["earlier", "later", "range", "none"],
346
+ help=(
347
+ "The timestep bias strategy, which may help direct the model toward learning low or high frequency details."
348
+ " Choices: ['earlier', 'later', 'range', 'none']."
349
+ " The default is 'none', which means no bias is applied, and training proceeds normally."
350
+ " The value of 'later' will increase the frequency of the model's final training timesteps."
351
+ ),
352
+ )
353
+ parser.add_argument(
354
+ "--timestep_bias_multiplier",
355
+ type=float,
356
+ default=1.0,
357
+ help=(
358
+ "The multiplier for the bias. Defaults to 1.0, which means no bias is applied."
359
+ " A value of 2.0 will double the weight of the bias, and a value of 0.5 will halve it."
360
+ ),
361
+ )
362
+ parser.add_argument(
363
+ "--timestep_bias_begin",
364
+ type=int,
365
+ default=0,
366
+ help=(
367
+ "When using `--timestep_bias_strategy=range`, the beginning (inclusive) timestep to bias."
368
+ " Defaults to zero, which equates to having no specific bias."
369
+ ),
370
+ )
371
+ parser.add_argument(
372
+ "--timestep_bias_end",
373
+ type=int,
374
+ default=1000,
375
+ help=(
376
+ "When using `--timestep_bias_strategy=range`, the final timestep (inclusive) to bias."
377
+ " Defaults to 1000, which is the number of timesteps that Stable Diffusion is trained on."
378
+ ),
379
+ )
380
+ parser.add_argument(
381
+ "--timestep_bias_portion",
382
+ type=float,
383
+ default=0.25,
384
+ help=(
385
+ "The portion of timesteps to bias. Defaults to 0.25, which means 25% of timesteps will be biased."
386
+ " A value of 0.5 will bias one half of the timesteps. The value provided for `--timestep_bias_strategy` determines"
387
+ " whether the biased portions are in the earlier or later timesteps."
388
+ ),
389
+ )
390
+ parser.add_argument(
391
+ "--snr_gamma",
392
+ type=float,
393
+ default=None,
394
+ help="SNR weighting gamma to be used if rebalancing the loss. Recommended value is 5.0. "
395
+ "More details here: https://arxiv.org/abs/2303.09556.",
396
+ )
397
+ parser.add_argument("--use_ema", action="store_true", help="Whether to use EMA model.")
398
+ parser.add_argument(
399
+ "--allow_tf32",
400
+ action="store_true",
401
+ help=(
402
+ "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see"
403
+ " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices"
404
+ ),
405
+ )
406
+ parser.add_argument(
407
+ "--dataloader_num_workers",
408
+ type=int,
409
+ default=0,
410
+ help=(
411
+ "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process."
412
+ ),
413
+ )
414
+ parser.add_argument(
415
+ "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes."
416
+ )
417
+ parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
418
+ parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
419
+ parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
420
+ parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
421
+ parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.")
422
+ parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
423
+ parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
424
+ parser.add_argument(
425
+ "--prediction_type",
426
+ type=str,
427
+ default=None,
428
+ help="The prediction_type that shall be used for training. Choose between 'epsilon' and 'v_prediction', or leave `None`. If left to `None`, the scheduler's default prediction type (`noise_scheduler.config.prediction_type`) is used.",
429
+ )
430
+ parser.add_argument(
431
+ "--hub_model_id",
432
+ type=str,
433
+ default=None,
434
+ help="The name of the repository to keep in sync with the local `output_dir`.",
435
+ )
436
+ parser.add_argument(
437
+ "--logging_dir",
438
+ type=str,
439
+ default="logs",
440
+ help=(
441
+ "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
442
+ " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
443
+ ),
444
+ )
445
+ parser.add_argument(
446
+ "--report_to",
447
+ type=str,
448
+ default="tensorboard",
449
+ help=(
450
+ 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`'
451
+ ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.'
452
+ ),
453
+ )
454
+ parser.add_argument(
455
+ "--mixed_precision",
456
+ type=str,
457
+ default=None,
458
+ choices=["no", "fp16", "bf16"],
459
+ help=(
460
+ "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >="
461
+ " 1.10 and an Nvidia Ampere GPU. Defaults to the value of the accelerate config of the current system or the"
462
+ " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config."
463
+ ),
464
+ )
465
+ parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
466
+ parser.add_argument(
467
+ "--enable_npu_flash_attention", action="store_true", help="Whether or not to use npu flash attention."
468
+ )
469
+ parser.add_argument(
470
+ "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers."
471
+ )
472
+ parser.add_argument("--noise_offset", type=float, default=0, help="The scale of noise offset.")
473
+
474
+ if input_args is not None:
475
+ args = parser.parse_args(input_args)
476
+ else:
477
+ args = parser.parse_args()
478
+
479
+ env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
480
+ if env_local_rank != -1 and env_local_rank != args.local_rank:
481
+ args.local_rank = env_local_rank
482
+
483
+ # Sanity checks
484
+ if args.dataset_name is None and args.train_data_dir is None:
485
+ raise ValueError("Need either a dataset name or a training folder.")
486
+ if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1:
487
+ raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].")
488
+
489
+ return args
490
+
491
+
492
+ # Adapted from pipelines.StableDiffusionXLPipeline.encode_prompt
493
+ def encode_prompt(batch, text_encoders, tokenizers, proportion_empty_prompts, caption_column, is_train=True):
494
+ prompt_embeds_list = []
495
+ prompt_batch = batch[caption_column]
496
+
497
+ captions = []
498
+ for caption in prompt_batch:
499
+ if random.random() < proportion_empty_prompts:
500
+ captions.append("")
501
+ elif isinstance(caption, str):
502
+ captions.append(caption)
503
+ elif isinstance(caption, (list, np.ndarray)):
504
+ # take a random caption if there are multiple
505
+ captions.append(random.choice(caption) if is_train else caption[0])
506
+
507
+ with torch.no_grad():
508
+ for tokenizer, text_encoder in zip(tokenizers, text_encoders):
509
+ text_inputs = tokenizer(
510
+ captions,
511
+ padding="max_length",
512
+ max_length=tokenizer.model_max_length,
513
+ truncation=True,
514
+ return_tensors="pt",
515
+ )
516
+ text_input_ids = text_inputs.input_ids
517
+ prompt_embeds = text_encoder(
518
+ text_input_ids.to(text_encoder.device),
519
+ output_hidden_states=True,
520
+ return_dict=False,
521
+ )
522
+
523
+ # We are only ever interested in the pooled output of the final text encoder
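+ # `prompt_embeds` is the tuple returned with `return_dict=False`: index 0 is the (projected)
+ # pooled output and index -1 holds all hidden states; `[-1][-2]` selects the penultimate
+ # hidden state, which SDXL uses for cross-attention conditioning. The pooled embedding is
+ # overwritten on each loop iteration, so only the second text encoder's pooled output is kept.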
524
+ pooled_prompt_embeds = prompt_embeds[0]
525
+ prompt_embeds = prompt_embeds[-1][-2]
526
+ bs_embed, seq_len, _ = prompt_embeds.shape
527
+ prompt_embeds = prompt_embeds.view(bs_embed, seq_len, -1)
528
+ prompt_embeds_list.append(prompt_embeds)
529
+
530
+ prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
531
+ pooled_prompt_embeds = pooled_prompt_embeds.view(bs_embed, -1)
532
+ return {"prompt_embeds": prompt_embeds.cpu(), "pooled_prompt_embeds": pooled_prompt_embeds.cpu()}
533
+
534
+
535
+ def compute_vae_encodings(batch, vae):
536
+ images = batch.pop("pixel_values")
537
+ pixel_values = torch.stack(list(images))
538
+ pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float()
539
+ pixel_values = pixel_values.to(vae.device, dtype=vae.dtype)
540
+
541
+ with torch.no_grad():
542
+ model_input = vae.encode(pixel_values).latent_dist.sample()
543
+ model_input = model_input * vae.config.scaling_factor
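+ # Scaling by `vae.config.scaling_factor` (0.13025 for the standard SDXL VAE) brings the
+ # latents to roughly unit variance, the scale the UNet expects as input.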
544
+
545
+ # There might be a slight performance improvement
546
+ # by changing model_input.cpu() to accelerator.gather(model_input)
547
+ return {"model_input": model_input.cpu()}
548
+
549
+
550
+ def generate_timestep_weights(args, num_timesteps):
551
+ weights = torch.ones(num_timesteps)
552
+
553
+ # Determine the indices to bias
554
+ num_to_bias = int(args.timestep_bias_portion * num_timesteps)
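+ # Example: with 1000 timesteps, `--timestep_bias_portion=0.25` and `--timestep_bias_strategy=later`,
+ # the weights of timesteps 750-999 are multiplied by `--timestep_bias_multiplier` before sampling.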
555
+
556
+ if args.timestep_bias_strategy == "later":
557
+ bias_indices = slice(-num_to_bias, None)
558
+ elif args.timestep_bias_strategy == "earlier":
559
+ bias_indices = slice(0, num_to_bias)
560
+ elif args.timestep_bias_strategy == "range":
561
+ # Out of the possible 1000 timesteps, we might want to focus on e.g. 200-500.
562
+ range_begin = args.timestep_bias_begin
563
+ range_end = args.timestep_bias_end
564
+ if range_begin < 0:
565
+ raise ValueError(
566
+ "When using the range strategy for timestep bias, you must provide a beginning timestep greater or equal to zero."
567
+ )
568
+ if range_end > num_timesteps:
569
+ raise ValueError(
570
+ "When using the range strategy for timestep bias, you must provide an ending timestep smaller than the number of timesteps."
571
+ )
572
+ bias_indices = slice(range_begin, range_end)
573
+ else: # 'none' or any other string
574
+ return weights
575
+ if args.timestep_bias_multiplier <= 0:
576
+ raise ValueError(
577
+ "The parameter --timestep_bias_multiplier is not intended to be used to disable the training of specific timesteps."
578
+ " If it was intended to disable timestep bias, use `--timestep_bias_strategy none` instead."
579
+ " A timestep bias multiplier less than or equal to 0 is not allowed."
580
+ )
581
+
582
+ # Apply the bias
583
+ weights[bias_indices] *= args.timestep_bias_multiplier
584
+
585
+ # Normalize
586
+ weights /= weights.sum()
587
+
588
+ return weights
589
+
590
+
591
+ def main(args):
592
+ if args.report_to == "wandb" and args.hub_token is not None:
593
+ raise ValueError(
594
+ "You cannot use both --report_to=wandb and --hub_token due to a security risk of exposing your token."
595
+ " Please use `huggingface-cli login` to authenticate with the Hub."
596
+ )
597
+
598
+ logging_dir = Path(args.output_dir, args.logging_dir)
599
+
600
+ accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
601
+
602
+ if torch.backends.mps.is_available() and args.mixed_precision == "bf16":
603
+ # due to pytorch#99272, MPS does not yet support bfloat16.
604
+ raise ValueError(
605
+ "Mixed precision training with bfloat16 is not supported on MPS. Please use fp16 (recommended) or fp32 instead."
606
+ )
607
+
608
+ accelerator = Accelerator(
609
+ gradient_accumulation_steps=args.gradient_accumulation_steps,
610
+ mixed_precision=args.mixed_precision,
611
+ log_with=args.report_to,
612
+ project_config=accelerator_project_config,
613
+ )
614
+
615
+ # Disable AMP for MPS.
616
+ if torch.backends.mps.is_available():
617
+ accelerator.native_amp = False
618
+
619
+ if args.report_to == "wandb":
620
+ if not is_wandb_available():
621
+ raise ImportError("Make sure to install wandb if you want to use it for logging during training.")
622
+ import wandb
623
+
624
+ # Make one log on every process with the configuration for debugging.
625
+ logging.basicConfig(
626
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
627
+ datefmt="%m/%d/%Y %H:%M:%S",
628
+ level=logging.INFO,
629
+ )
630
+ logger.info(accelerator.state, main_process_only=False)
631
+ if accelerator.is_local_main_process:
632
+ datasets.utils.logging.set_verbosity_warning()
633
+ transformers.utils.logging.set_verbosity_warning()
634
+ diffusers.utils.logging.set_verbosity_info()
635
+ else:
636
+ datasets.utils.logging.set_verbosity_error()
637
+ transformers.utils.logging.set_verbosity_error()
638
+ diffusers.utils.logging.set_verbosity_error()
639
+
640
+ # If passed along, set the training seed now.
641
+ if args.seed is not None:
642
+ set_seed(args.seed)
643
+
644
+ # Handle the repository creation
645
+ if accelerator.is_main_process:
646
+ if args.output_dir is not None:
647
+ os.makedirs(args.output_dir, exist_ok=True)
648
+
649
+ if args.push_to_hub:
650
+ repo_id = create_repo(
651
+ repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
652
+ ).repo_id
653
+
654
+ # Load the tokenizers
655
+ tokenizer_one = AutoTokenizer.from_pretrained(
656
+ args.pretrained_model_name_or_path,
657
+ subfolder="tokenizer",
658
+ revision=args.revision,
659
+ use_fast=False,
660
+ )
661
+ tokenizer_two = AutoTokenizer.from_pretrained(
662
+ args.pretrained_model_name_or_path,
663
+ subfolder="tokenizer_2",
664
+ revision=args.revision,
665
+ use_fast=False,
666
+ )
667
+
668
+ # import correct text encoder classes
669
+ text_encoder_cls_one = import_model_class_from_model_name_or_path(
670
+ args.pretrained_model_name_or_path, args.revision
671
+ )
672
+ text_encoder_cls_two = import_model_class_from_model_name_or_path(
673
+ args.pretrained_model_name_or_path, args.revision, subfolder="text_encoder_2"
674
+ )
675
+
676
+ # Load scheduler and models
677
+ noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
678
+ # Check for terminal SNR in combination with SNR Gamma
679
+ text_encoder_one = text_encoder_cls_one.from_pretrained(
680
+ args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision, variant=args.variant
681
+ )
682
+ text_encoder_two = text_encoder_cls_two.from_pretrained(
683
+ args.pretrained_model_name_or_path, subfolder="text_encoder_2", revision=args.revision, variant=args.variant
684
+ )
685
+ vae_path = (
686
+ args.pretrained_model_name_or_path
687
+ if args.pretrained_vae_model_name_or_path is None
688
+ else args.pretrained_vae_model_name_or_path
689
+ )
690
+ vae = AutoencoderKL.from_pretrained(
691
+ vae_path,
692
+ subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None,
693
+ revision=args.revision,
694
+ variant=args.variant,
695
+ )
696
+ unet = UNet2DConditionModel.from_pretrained(
697
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, variant=args.variant
698
+ )
699
+
700
+ # Freeze vae and text encoders.
701
+ vae.requires_grad_(False)
702
+ text_encoder_one.requires_grad_(False)
703
+ text_encoder_two.requires_grad_(False)
704
+ # Set unet as trainable.
705
+ unet.train()
706
+
707
+ # For mixed precision training we cast all non-trainable weights to half-precision
708
+ # as these weights are only used for inference, keeping weights in full precision is not required.
709
+ weight_dtype = torch.float32
710
+ if accelerator.mixed_precision == "fp16":
711
+ weight_dtype = torch.float16
712
+ elif accelerator.mixed_precision == "bf16":
713
+ weight_dtype = torch.bfloat16
714
+
715
+ # Move unet, vae and text_encoder to device and cast to weight_dtype
716
+ # The VAE is in float32 to avoid NaN losses.
717
+ vae.to(accelerator.device, dtype=torch.float32)
718
+ text_encoder_one.to(accelerator.device, dtype=weight_dtype)
719
+ text_encoder_two.to(accelerator.device, dtype=weight_dtype)
720
+
721
+ # Create EMA for the unet.
722
+ if args.use_ema:
723
+ ema_unet = UNet2DConditionModel.from_pretrained(
724
+ args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, variant=args.variant
725
+ )
726
+ ema_unet = EMAModel(ema_unet.parameters(), model_cls=UNet2DConditionModel, model_config=ema_unet.config)
727
+ if args.enable_npu_flash_attention:
728
+ if is_torch_npu_available():
729
+ logger.info("npu flash attention enabled.")
730
+ unet.enable_npu_flash_attention()
731
+ else:
732
+ raise ValueError("npu flash attention requires torch_npu extensions and is supported only on npu devices.")
733
+ if args.enable_xformers_memory_efficient_attention:
734
+ if is_xformers_available():
735
+ import xformers
736
+
737
+ xformers_version = version.parse(xformers.__version__)
738
+ if xformers_version == version.parse("0.0.16"):
739
+ logger.warning(
740
+ "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details."
741
+ )
742
+ unet.enable_xformers_memory_efficient_attention()
743
+ else:
744
+ raise ValueError("xformers is not available. Make sure it is installed correctly")
745
+
746
+ # `accelerate` 0.16.0 and above has better support for customized saving
747
+ if version.parse(accelerate.__version__) >= version.parse("0.16.0"):
748
+ # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format
749
+ def save_model_hook(models, weights, output_dir):
750
+ if accelerator.is_main_process:
751
+ if args.use_ema:
752
+ ema_unet.save_pretrained(os.path.join(output_dir, "unet_ema"))
753
+
754
+ for i, model in enumerate(models):
755
+ model.save_pretrained(os.path.join(output_dir, "unet"))
756
+
757
+ # make sure to pop weight so that corresponding model is not saved again
758
+ if weights:
759
+ weights.pop()
760
+
761
+ def load_model_hook(models, input_dir):
762
+ if args.use_ema:
763
+ load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DConditionModel)
764
+ ema_unet.load_state_dict(load_model.state_dict())
765
+ ema_unet.to(accelerator.device)
766
+ del load_model
767
+
768
+ for _ in range(len(models)):
769
+ # pop models so that they are not loaded again
770
+ model = models.pop()
771
+
772
+ # load diffusers style into model
773
+ load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet")
774
+ model.register_to_config(**load_model.config)
775
+
776
+ model.load_state_dict(load_model.state_dict())
777
+ del load_model
778
+
779
+ accelerator.register_save_state_pre_hook(save_model_hook)
780
+ accelerator.register_load_state_pre_hook(load_model_hook)
781
+
782
+ if args.gradient_checkpointing:
783
+ unet.enable_gradient_checkpointing()
784
+
785
+ # Enable TF32 for faster training on Ampere GPUs,
786
+ # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices
787
+ if args.allow_tf32:
788
+ torch.backends.cuda.matmul.allow_tf32 = True
789
+
790
+ if args.scale_lr:
791
+ args.learning_rate = (
792
+ args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
793
+ )
794
+
795
+ # Use 8-bit Adam for lower memory usage or to fine-tune the model on 16GB GPUs
796
+ if args.use_8bit_adam:
797
+ try:
798
+ import bitsandbytes as bnb
799
+ except ImportError:
800
+ raise ImportError(
801
+ "To use 8-bit Adam, please install the bitsandbytes library: `pip install bitsandbytes`."
802
+ )
803
+
804
+ optimizer_class = bnb.optim.AdamW8bit
805
+ else:
806
+ optimizer_class = torch.optim.AdamW
807
+
808
+ # Optimizer creation
809
+ params_to_optimize = unet.parameters()
810
+ optimizer = optimizer_class(
811
+ params_to_optimize,
812
+ lr=args.learning_rate,
813
+ betas=(args.adam_beta1, args.adam_beta2),
814
+ weight_decay=args.adam_weight_decay,
815
+ eps=args.adam_epsilon,
816
+ )
817
+
818
+ # Get the datasets: you can either provide your own training and evaluation files (see below)
819
+ # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub).
820
+
821
+ # In distributed training, the load_dataset function guarantees that only one local process can concurrently
822
+ # download the dataset.
823
+ if args.dataset_name is not None:
824
+ # Downloading and loading a dataset from the hub.
825
+ dataset = load_dataset(
826
+ args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, data_dir=args.train_data_dir
827
+ )
828
+ else:
829
+ data_files = {}
830
+ if args.train_data_dir is not None:
831
+ data_files["train"] = os.path.join(args.train_data_dir, "**")
832
+ dataset = load_dataset(
833
+ "imagefolder",
834
+ data_files=data_files,
835
+ cache_dir=args.cache_dir,
836
+ )
837
+ # See more about loading custom images at
838
+ # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder
839
+
840
+ # Preprocessing the datasets.
841
+ # We need to tokenize inputs and targets.
842
+ column_names = dataset["train"].column_names
843
+
844
+ # 6. Get the column names for input/target.
845
+ dataset_columns = DATASET_NAME_MAPPING.get(args.dataset_name, None)
846
+ if args.image_column is None:
847
+ image_column = dataset_columns[0] if dataset_columns is not None else column_names[0]
848
+ else:
849
+ image_column = args.image_column
850
+ if image_column not in column_names:
851
+ raise ValueError(
852
+ f"`--image_column` value '{args.image_column}' needs to be one of: {', '.join(column_names)}"
853
+ )
854
+ if args.caption_column is None:
855
+ caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1]
856
+ else:
857
+ caption_column = args.caption_column
858
+ if caption_column not in column_names:
859
+ raise ValueError(
860
+ f"`--caption_column` value '{args.caption_column}' needs to be one of: {', '.join(column_names)}"
861
+ )
862
+
863
+ # Preprocessing the datasets.
864
+ train_resize = transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR)
865
+ train_crop = transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution)
866
+ train_flip = transforms.RandomHorizontalFlip(p=1.0)
867
+ train_transforms = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
868
+
869
+ def preprocess_train(examples):
870
+ images = [image.convert("RGB") for image in examples[image_column]]
871
+ # image aug
872
+ original_sizes = []
873
+ all_images = []
874
+ crop_top_lefts = []
875
+ for image in images:
876
+ original_sizes.append((image.height, image.width))
877
+ image = train_resize(image)
878
+ if args.random_flip and random.random() < 0.5:
879
+ # flip
880
+ image = train_flip(image)
881
+ if args.center_crop:
882
+ y1 = max(0, int(round((image.height - args.resolution) / 2.0)))
883
+ x1 = max(0, int(round((image.width - args.resolution) / 2.0)))
884
+ image = train_crop(image)
885
+ else:
886
+ y1, x1, h, w = train_crop.get_params(image, (args.resolution, args.resolution))
887
+ image = crop(image, y1, x1, h, w)
888
+ crop_top_left = (y1, x1)
889
+ crop_top_lefts.append(crop_top_left)
890
+ image = train_transforms(image)
891
+ all_images.append(image)
892
+
893
+ examples["original_sizes"] = original_sizes
894
+ examples["crop_top_lefts"] = crop_top_lefts
895
+ examples["pixel_values"] = all_images
896
+ return examples
897
+
898
+ with accelerator.main_process_first():
899
+ if args.max_train_samples is not None:
900
+ dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
901
+ # Set the training transforms
902
+ train_dataset = dataset["train"].with_transform(preprocess_train)
903
+
904
+ # Let's first compute all the embeddings so that we can free up the text encoders
905
+ # from memory. We will pre-compute the VAE encodings too.
906
+ text_encoders = [text_encoder_one, text_encoder_two]
907
+ tokenizers = [tokenizer_one, tokenizer_two]
908
+ compute_embeddings_fn = functools.partial(
909
+ encode_prompt,
910
+ text_encoders=text_encoders,
911
+ tokenizers=tokenizers,
912
+ proportion_empty_prompts=args.proportion_empty_prompts,
913
+ caption_column=args.caption_column,
914
+ )
915
+ compute_vae_encodings_fn = functools.partial(compute_vae_encodings, vae=vae)
916
+ with accelerator.main_process_first():
917
+ from datasets.fingerprint import Hasher
918
+
919
+ # fingerprint used by the cache for the other processes to load the result
920
+ # details: https://github.com/huggingface/diffusers/pull/4038#discussion_r1266078401
921
+ new_fingerprint = Hasher.hash(args)
922
+ new_fingerprint_for_vae = Hasher.hash((vae_path, args))
923
+ train_dataset_with_embeddings = train_dataset.map(
924
+ compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint
925
+ )
926
+ train_dataset_with_vae = train_dataset.map(
927
+ compute_vae_encodings_fn,
928
+ batched=True,
929
+ batch_size=args.train_batch_size,
930
+ new_fingerprint=new_fingerprint_for_vae,
931
+ )
932
+ precomputed_dataset = concatenate_datasets(
933
+ [train_dataset_with_embeddings, train_dataset_with_vae.remove_columns(["image", "text"])], axis=1
934
+ )
935
+ precomputed_dataset = precomputed_dataset.with_transform(preprocess_train)
936
+
937
+ del compute_vae_encodings_fn, compute_embeddings_fn, text_encoder_one, text_encoder_two
938
+ del text_encoders, tokenizers, vae
939
+ gc.collect()
940
+ if is_torch_npu_available():
941
+ torch_npu.npu.empty_cache()
942
+ elif torch.cuda.is_available():
943
+ torch.cuda.empty_cache()
944
+
945
+ def collate_fn(examples):
946
+ model_input = torch.stack([torch.tensor(example["model_input"]) for example in examples])
947
+ original_sizes = [example["original_sizes"] for example in examples]
948
+ crop_top_lefts = [example["crop_top_lefts"] for example in examples]
949
+ prompt_embeds = torch.stack([torch.tensor(example["prompt_embeds"]) for example in examples])
950
+ pooled_prompt_embeds = torch.stack([torch.tensor(example["pooled_prompt_embeds"]) for example in examples])
951
+
952
+ return {
953
+ "model_input": model_input,
954
+ "prompt_embeds": prompt_embeds,
955
+ "pooled_prompt_embeds": pooled_prompt_embeds,
956
+ "original_sizes": original_sizes,
957
+ "crop_top_lefts": crop_top_lefts,
958
+ }
959
+
960
+ # DataLoaders creation:
961
+ train_dataloader = torch.utils.data.DataLoader(
962
+ precomputed_dataset,
963
+ shuffle=True,
964
+ collate_fn=collate_fn,
965
+ batch_size=args.train_batch_size,
966
+ num_workers=args.dataloader_num_workers,
967
+ )
968
+
969
+ # Scheduler and math around the number of training steps.
970
+ overrode_max_train_steps = False
971
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
972
+ if args.max_train_steps is None:
973
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
974
+ overrode_max_train_steps = True
975
+
976
+ lr_scheduler = get_scheduler(
977
+ args.lr_scheduler,
978
+ optimizer=optimizer,
979
+ num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps,
980
+ num_training_steps=args.max_train_steps * args.gradient_accumulation_steps,
981
+ )
982
+
983
+ # Prepare everything with our `accelerator`.
984
+ unet, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
985
+ unet, optimizer, train_dataloader, lr_scheduler
986
+ )
987
+
988
+ if args.use_ema:
989
+ ema_unet.to(accelerator.device)
990
+
991
+ # We need to recalculate our total training steps as the size of the training dataloader may have changed.
992
+ num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
993
+ if overrode_max_train_steps:
994
+ args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
995
+ # Afterwards we recalculate our number of training epochs
996
+ args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
997
+
998
+ # We need to initialize the trackers we use, and also store our configuration.
999
+ # The trackers initialize automatically on the main process.
1000
+ if accelerator.is_main_process:
1001
+ accelerator.init_trackers("text2image-fine-tune-sdxl", config=vars(args))
1002
+
1003
+ # Function for unwrapping if torch.compile() was used in accelerate.
1004
+ def unwrap_model(model):
1005
+ model = accelerator.unwrap_model(model)
1006
+ model = model._orig_mod if is_compiled_module(model) else model
1007
+ return model
1008
+
1009
+ if torch.backends.mps.is_available() or "playground" in args.pretrained_model_name_or_path:
1010
+ autocast_ctx = nullcontext()
1011
+ else:
1012
+ autocast_ctx = torch.autocast(accelerator.device.type)
1013
+
1014
+ # Train!
1015
+ total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
1016
+
1017
+ logger.info("***** Running training *****")
1018
+ logger.info(f" Num examples = {len(precomputed_dataset)}")
1019
+ logger.info(f" Num Epochs = {args.num_train_epochs}")
1020
+ logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
1021
+ logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
1022
+ logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
1023
+ logger.info(f" Total optimization steps = {args.max_train_steps}")
1024
+ global_step = 0
1025
+ first_epoch = 0
1026
+
1027
+ # Potentially load in the weights and states from a previous save
1028
+ if args.resume_from_checkpoint:
1029
+ if args.resume_from_checkpoint != "latest":
1030
+ path = os.path.basename(args.resume_from_checkpoint)
1031
+ else:
1032
+ # Get the most recent checkpoint
1033
+ dirs = os.listdir(args.output_dir)
1034
+ dirs = [d for d in dirs if d.startswith("checkpoint")]
1035
+ dirs = sorted(dirs, key=lambda x: int(x.split("-")[1]))
1036
+ path = dirs[-1] if len(dirs) > 0 else None
1037
+
1038
+ if path is None:
1039
+ accelerator.print(
1040
+ f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run."
1041
+ )
1042
+ args.resume_from_checkpoint = None
1043
+ initial_global_step = 0
1044
+ else:
1045
+ accelerator.print(f"Resuming from checkpoint {path}")
1046
+ accelerator.load_state(os.path.join(args.output_dir, path))
1047
+ global_step = int(path.split("-")[1])
1048
+
1049
+ initial_global_step = global_step
1050
+ first_epoch = global_step // num_update_steps_per_epoch
1051
+
1052
+ else:
1053
+ initial_global_step = 0
1054
+
1055
+ progress_bar = tqdm(
1056
+ range(0, args.max_train_steps),
1057
+ initial=initial_global_step,
1058
+ desc="Steps",
1059
+ # Only show the progress bar once on each machine.
1060
+ disable=not accelerator.is_local_main_process,
1061
+ )
1062
+
1063
+ for epoch in range(first_epoch, args.num_train_epochs):
1064
+ train_loss = 0.0
1065
+ for step, batch in enumerate(train_dataloader):
1066
+ with accelerator.accumulate(unet):
1067
+ # Sample noise that we'll add to the latents
1068
+ model_input = batch["model_input"].to(accelerator.device)
1069
+ noise = torch.randn_like(model_input)
1070
+ if args.noise_offset:
1071
+ # https://www.crosslabs.org//blog/diffusion-with-offset-noise
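+ # Offset noise adds a small amount of per-channel constant noise so the model can learn to
+ # shift overall image brightness, which helps it generate very dark or very bright images.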
1072
+ noise += args.noise_offset * torch.randn(
1073
+ (model_input.shape[0], model_input.shape[1], 1, 1), device=model_input.device
1074
+ )
1075
+
1076
+ bsz = model_input.shape[0]
1077
+ if args.timestep_bias_strategy == "none":
1078
+ # Sample a random timestep for each image without bias.
1079
+ timesteps = torch.randint(
1080
+ 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=model_input.device
1081
+ )
1082
+ else:
1083
+ # Sample a random timestep for each image, potentially biased by the timestep weights.
1084
+ # Biasing the timestep weights allows us to spend less time training irrelevant timesteps.
1085
+ weights = generate_timestep_weights(args, noise_scheduler.config.num_train_timesteps).to(
1086
+ model_input.device
1087
+ )
1088
+ timesteps = torch.multinomial(weights, bsz, replacement=True).long()
1089
+
1090
+ # Add noise to the model input according to the noise magnitude at each timestep
1091
+ # (this is the forward diffusion process)
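+ # i.e. noisy_model_input = sqrt(alpha_bar_t) * model_input + sqrt(1 - alpha_bar_t) * noise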
1092
+ noisy_model_input = noise_scheduler.add_noise(model_input, noise, timesteps).to(dtype=weight_dtype)
1093
+
1094
+ # time ids
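+ # SDXL is micro-conditioned on (original_size, crop_top_left, target_size); e.g. an image that
+ # was originally 768x1024 (h x w), cropped at (0, 128) and trained at 1024x1024 yields
+ # add_time_ids = [768, 1024, 0, 128, 1024, 1024].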
1095
+ def compute_time_ids(original_size, crops_coords_top_left):
1096
+ # Adapted from pipeline.StableDiffusionXLPipeline._get_add_time_ids
1097
+ target_size = (args.resolution, args.resolution)
1098
+ add_time_ids = list(original_size + crops_coords_top_left + target_size)
1099
+ add_time_ids = torch.tensor([add_time_ids], device=accelerator.device, dtype=weight_dtype)
1100
+ return add_time_ids
1101
+
1102
+ add_time_ids = torch.cat(
1103
+ [compute_time_ids(s, c) for s, c in zip(batch["original_sizes"], batch["crop_top_lefts"])]
1104
+ )
1105
+
1106
+ # Predict the noise residual
1107
+ unet_added_conditions = {"time_ids": add_time_ids}
1108
+ prompt_embeds = batch["prompt_embeds"].to(accelerator.device, dtype=weight_dtype)
1109
+ pooled_prompt_embeds = batch["pooled_prompt_embeds"].to(accelerator.device)
1110
+ unet_added_conditions.update({"text_embeds": pooled_prompt_embeds})
1111
+ model_pred = unet(
1112
+ noisy_model_input,
1113
+ timesteps,
1114
+ prompt_embeds,
1115
+ added_cond_kwargs=unet_added_conditions,
1116
+ return_dict=False,
1117
+ )[0]
1118
+
1119
+ # Get the target for loss depending on the prediction type
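+ # "epsilon" means the UNet predicts the added noise itself, while "v_prediction" means it
+ # predicts v_t = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * x_0 (see Salimans & Ho,
+ # "Progressive Distillation for Fast Sampling of Diffusion Models").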
1120
+ if args.prediction_type is not None:
1121
+ # set prediction_type of scheduler if defined
1122
+ noise_scheduler.register_to_config(prediction_type=args.prediction_type)
1123
+
1124
+ if noise_scheduler.config.prediction_type == "epsilon":
1125
+ target = noise
1126
+ elif noise_scheduler.config.prediction_type == "v_prediction":
1127
+ target = noise_scheduler.get_velocity(model_input, noise, timesteps)
1128
+ elif noise_scheduler.config.prediction_type == "sample":
1129
+ # We set the target to latents here, but the model_pred will return the noise sample prediction.
1130
+ target = model_input
1131
+ # We will have to subtract the noise residual from the prediction to get the target sample.
1132
+ model_pred = model_pred - noise
1133
+ else:
1134
+ raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
1135
+
1136
+ if args.snr_gamma is None:
1137
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
1138
+ else:
1139
+ # Compute loss-weights as per Section 3.4 of https://arxiv.org/abs/2303.09556.
1140
+ # Since we predict the noise instead of x_0, the original formulation is slightly changed.
1141
+ # This is discussed in Section 4.2 of the same paper.
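+ # In short, the per-sample weight is min(SNR(t), snr_gamma) / SNR(t) for epsilon prediction and
+ # min(SNR(t), snr_gamma) / (SNR(t) + 1) for v-prediction, which down-weights very easy
+ # (high-SNR, low-noise) timesteps.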
1142
+ snr = compute_snr(noise_scheduler, timesteps)
1143
+ mse_loss_weights = torch.stack([snr, args.snr_gamma * torch.ones_like(timesteps)], dim=1).min(
1144
+ dim=1
1145
+ )[0]
1146
+ if noise_scheduler.config.prediction_type == "epsilon":
1147
+ mse_loss_weights = mse_loss_weights / snr
1148
+ elif noise_scheduler.config.prediction_type == "v_prediction":
1149
+ mse_loss_weights = mse_loss_weights / (snr + 1)
1150
+
1151
+ loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
1152
+ loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
1153
+ loss = loss.mean()
1154
+
1155
+ # Gather the losses across all processes for logging (if we use distributed training).
1156
+ avg_loss = accelerator.gather(loss.repeat(args.train_batch_size)).mean()
1157
+ train_loss += avg_loss.item() / args.gradient_accumulation_steps
1158
+
1159
+ # Backpropagate
1160
+ accelerator.backward(loss)
1161
+ if accelerator.sync_gradients:
1162
+ params_to_clip = unet.parameters()
1163
+ accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
1164
+ optimizer.step()
1165
+ lr_scheduler.step()
1166
+ optimizer.zero_grad()
1167
+
1168
+ # Checks if the accelerator has performed an optimization step behind the scenes
1169
+ if accelerator.sync_gradients:
1170
+ if args.use_ema:
1171
+ ema_unet.step(unet.parameters())
1172
+ progress_bar.update(1)
1173
+ global_step += 1
1174
+ accelerator.log({"train_loss": train_loss}, step=global_step)
1175
+ train_loss = 0.0
1176
+
1177
+ # DeepSpeed requires saving weights on every device; saving weights only on the main process would cause issues.
1178
+ if accelerator.distributed_type == DistributedType.DEEPSPEED or accelerator.is_main_process:
1179
+ if global_step % args.checkpointing_steps == 0:
1180
+ # _before_ saving state, check if this save would set us over the `checkpoints_total_limit`
1181
+ if args.checkpoints_total_limit is not None:
1182
+ checkpoints = os.listdir(args.output_dir)
1183
+ checkpoints = [d for d in checkpoints if d.startswith("checkpoint")]
1184
+ checkpoints = sorted(checkpoints, key=lambda x: int(x.split("-")[1]))
1185
+
1186
+ # before we save the new checkpoint, we need to have at _most_ `checkpoints_total_limit - 1` checkpoints
1187
+ if len(checkpoints) >= args.checkpoints_total_limit:
1188
+ num_to_remove = len(checkpoints) - args.checkpoints_total_limit + 1
1189
+ removing_checkpoints = checkpoints[0:num_to_remove]
1190
+
1191
+ logger.info(
1192
+ f"{len(checkpoints)} checkpoints already exist, removing {len(removing_checkpoints)} checkpoints"
1193
+ )
1194
+ logger.info(f"removing checkpoints: {', '.join(removing_checkpoints)}")
1195
+
1196
+ for removing_checkpoint in removing_checkpoints:
1197
+ removing_checkpoint = os.path.join(args.output_dir, removing_checkpoint)
1198
+ shutil.rmtree(removing_checkpoint)
1199
+
1200
+ save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}")
1201
+ accelerator.save_state(save_path)
1202
+ logger.info(f"Saved state to {save_path}")
1203
+
1204
+ logs = {"step_loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
1205
+ progress_bar.set_postfix(**logs)
1206
+
1207
+ if global_step >= args.max_train_steps:
1208
+ break
1209
+
1210
+ if accelerator.is_main_process:
1211
+ if args.validation_prompt is not None and epoch % args.validation_epochs == 0:
1212
+ logger.info(
1213
+ f"Running validation... \n Generating {args.num_validation_images} images with prompt:"
1214
+ f" {args.validation_prompt}."
1215
+ )
1216
+ if args.use_ema:
1217
+ # Store the UNet parameters temporarily and load the EMA parameters to perform inference.
1218
+ ema_unet.store(unet.parameters())
1219
+ ema_unet.copy_to(unet.parameters())
1220
+
1221
+ # create pipeline
1222
+ vae = AutoencoderKL.from_pretrained(
1223
+ vae_path,
1224
+ subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None,
1225
+ revision=args.revision,
1226
+ variant=args.variant,
1227
+ )
1228
+ pipeline = StableDiffusionXLPipeline.from_pretrained(
1229
+ args.pretrained_model_name_or_path,
1230
+ vae=vae,
1231
+ unet=accelerator.unwrap_model(unet),
1232
+ revision=args.revision,
1233
+ variant=args.variant,
1234
+ torch_dtype=weight_dtype,
1235
+ )
1236
+ if args.prediction_type is not None:
1237
+ scheduler_args = {"prediction_type": args.prediction_type}
1238
+ pipeline.scheduler = pipeline.scheduler.from_config(pipeline.scheduler.config, **scheduler_args)
1239
+
1240
+ pipeline = pipeline.to(accelerator.device)
1241
+ pipeline.set_progress_bar_config(disable=True)
1242
+
1243
+ # run inference
1244
+ generator = (
1245
+ torch.Generator(device=accelerator.device).manual_seed(args.seed)
1246
+ if args.seed is not None
1247
+ else None
1248
+ )
1249
+ pipeline_args = {"prompt": args.validation_prompt}
1250
+
1251
+ with autocast_ctx:
1252
+ images = [
1253
+ pipeline(**pipeline_args, generator=generator, num_inference_steps=25).images[0]
1254
+ for _ in range(args.num_validation_images)
1255
+ ]
1256
+
1257
+ for tracker in accelerator.trackers:
1258
+ if tracker.name == "tensorboard":
1259
+ np_images = np.stack([np.asarray(img) for img in images])
1260
+ tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC")
1261
+ if tracker.name == "wandb":
1262
+ tracker.log(
1263
+ {
1264
+ "validation": [
1265
+ wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
1266
+ for i, image in enumerate(images)
1267
+ ]
1268
+ }
1269
+ )
1270
+
1271
+ del pipeline
1272
+ if is_torch_npu_available():
1273
+ torch_npu.npu.empty_cache()
1274
+ elif torch.cuda.is_available():
1275
+ torch.cuda.empty_cache()
1276
+
1277
+ if args.use_ema:
1278
+ # Switch back to the original UNet parameters.
1279
+ ema_unet.restore(unet.parameters())
1280
+
1281
+ accelerator.wait_for_everyone()
1282
+ if accelerator.is_main_process:
1283
+ unet = unwrap_model(unet)
1284
+ if args.use_ema:
1285
+ ema_unet.copy_to(unet.parameters())
1286
+
1287
+ # Serialize pipeline.
1288
+ vae = AutoencoderKL.from_pretrained(
1289
+ vae_path,
1290
+ subfolder="vae" if args.pretrained_vae_model_name_or_path is None else None,
1291
+ revision=args.revision,
1292
+ variant=args.variant,
1293
+ torch_dtype=weight_dtype,
1294
+ )
1295
+ pipeline = StableDiffusionXLPipeline.from_pretrained(
1296
+ args.pretrained_model_name_or_path,
1297
+ unet=unet,
1298
+ vae=vae,
1299
+ revision=args.revision,
1300
+ variant=args.variant,
1301
+ torch_dtype=weight_dtype,
1302
+ )
1303
+ if args.prediction_type is not None:
1304
+ scheduler_args = {"prediction_type": args.prediction_type}
1305
+ pipeline.scheduler = pipeline.scheduler.from_config(pipeline.scheduler.config, **scheduler_args)
1306
+ pipeline.save_pretrained(args.output_dir)
1307
+
1308
+ # run inference
1309
+ images = []
1310
+ if args.validation_prompt and args.num_validation_images > 0:
1311
+ pipeline = pipeline.to(accelerator.device)
1312
+ generator = (
1313
+ torch.Generator(device=accelerator.device).manual_seed(args.seed) if args.seed is not None else None
1314
+ )
1315
+
1316
+ with autocast_ctx:
1317
+ images = [
1318
+ pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0]
1319
+ for _ in range(args.num_validation_images)
1320
+ ]
1321
+
1322
+ for tracker in accelerator.trackers:
1323
+ if tracker.name == "tensorboard":
1324
+ np_images = np.stack([np.asarray(img) for img in images])
1325
+ tracker.writer.add_images("test", np_images, epoch, dataformats="NHWC")
1326
+ if tracker.name == "wandb":
1327
+ tracker.log(
1328
+ {
1329
+ "test": [
1330
+ wandb.Image(image, caption=f"{i}: {args.validation_prompt}")
1331
+ for i, image in enumerate(images)
1332
+ ]
1333
+ }
1334
+ )
1335
+
1336
+ if args.push_to_hub:
1337
+ save_model_card(
1338
+ repo_id=repo_id,
1339
+ images=images,
1340
+ validation_prompt=args.validation_prompt,
1341
+ base_model=args.pretrained_model_name_or_path,
1342
+ dataset_name=args.dataset_name,
1343
+ repo_folder=args.output_dir,
1344
+ vae_path=args.pretrained_vae_model_name_or_path,
1345
+ )
1346
+ upload_folder(
1347
+ repo_id=repo_id,
1348
+ folder_path=args.output_dir,
1349
+ commit_message="End of training",
1350
+ ignore_patterns=["step_*", "epoch_*"],
1351
+ )
1352
+
1353
+ accelerator.end_training()
1354
+
1355
+
1356
+ if __name__ == "__main__":
1357
+ args = parse_args()
1358
+ main(args)