source | url | file_type | chunk | chunk_id |
---|---|---|---|---|
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | scripts to train a DeepFloyd IF model with LoRA or the full model. | 20_10_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | DeepFloyd IF uses predicted variance, but the Diffusers training scripts use predicted error, so the trained DeepFloyd IF models are switched to a fixed variance schedule. The training scripts will update the scheduler config of the fully trained model for you. However, when you load the saved LoRA weights, you must also update the pipeline's scheduler config.
```py
from diffusers import DiffusionPipeline | 20_10_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", use_safetensors=True)
pipe.load_lora_weights("<lora weights path>")
# Update scheduler config to fixed variance schedule
pipe.scheduler = pipe.scheduler.__class__.from_config(pipe.scheduler.config, variance_type="fixed_small")
```
The stage 2 model requires additional validation images to upscale. You can download and use a downsized version of the training images for this.
```py
from huggingface_hub import snapshot_download | 20_10_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | local_dir = "./dog_downsized"
snapshot_download(
"diffusers/dog-example-downsized",
local_dir=local_dir,
repo_type="dataset",
ignore_patterns=".gitattributes",
)
```
The code samples below provide a brief overview of how to train a DeepFloyd IF model with a combination of DreamBooth and LoRA. Some important parameters to note are: | 20_10_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | * `--resolution=64`, a much smaller resolution is required because DeepFloyd IF is a pixel diffusion model, and working on uncompressed pixels means the input images must be smaller
* `--pre_compute_text_embeddings`, compute the text embeddings ahead of time to save memory because the [`~transformers.T5Model`] can take up a lot of memory
* `--tokenizer_max_length=77`, T5 as the text encoder supports a longer text length, but the default model encoding procedure uses a shorter one | 20_10_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | * `--text_encoder_use_attention_mask`, to pass the attention mask to the text encoder
<hfoptions id="IF-DreamBooth">
<hfoption id="Stage 1 LoRA DreamBooth">
Training stage 1 of DeepFloyd IF with LoRA and DreamBooth requires ~28GB of memory.
```bash
export MODEL_NAME="DeepFloyd/IF-I-XL-v1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="dreambooth_dog_lora" | 20_10_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | accelerate launch train_dreambooth_lora.py \
--report_to wandb \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--instance_prompt="a sks dog" \
--resolution=64 \
--train_batch_size=4 \
--gradient_accumulation_steps=1 \
--learning_rate=5e-6 \
--scale_lr \
--max_train_steps=1200 \
--validation_prompt="a sks dog" \
--validation_epochs=25 \
--checkpointing_steps=100 \
--pre_compute_text_embeddings \
--tokenizer_max_length=77 \ | 20_10_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | --validation_epochs=25 \
--checkpointing_steps=100 \
--pre_compute_text_embeddings \
--tokenizer_max_length=77 \
--text_encoder_use_attention_mask
```
</hfoption>
<hfoption id="Stage 2 LoRA DreamBooth">
For stage 2 of DeepFloyd IF with LoRA and DreamBooth, pay attention to these parameters:
* `--validation_images`, the images to upscale during validation
* `--class_labels_conditioning=timesteps`, to additionally condition the UNet as needed in stage 2 | 20_10_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | * `--class_labels_conditioning=timesteps`, to additionally condition the UNet as needed in stage 2
* `--learning_rate=1e-6`, a lower learning rate is used compared to stage 1
* `--resolution=256`, the expected resolution for the upscaler
```bash
export MODEL_NAME="DeepFloyd/IF-II-L-v1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="dreambooth_dog_upscale"
export VALIDATION_IMAGES="dog_downsized/image_1.png dog_downsized/image_2.png dog_downsized/image_3.png dog_downsized/image_4.png" | 20_10_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | python train_dreambooth_lora.py \
--report_to wandb \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--instance_prompt="a sks dog" \
--resolution=256 \
--train_batch_size=4 \
--gradient_accumulation_steps=1 \
--learning_rate=1e-6 \
--max_train_steps=2000 \
--validation_prompt="a sks dog" \
--validation_epochs=100 \
--checkpointing_steps=500 \
--pre_compute_text_embeddings \
--tokenizer_max_length=77 \
--text_encoder_use_attention_mask \ | 20_10_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | --checkpointing_steps=500 \
--pre_compute_text_embeddings \
--tokenizer_max_length=77 \
--text_encoder_use_attention_mask \
--validation_images $VALIDATION_IMAGES \
--class_labels_conditioning=timesteps
```
</hfoption>
<hfoption id="Stage 1 DreamBooth">
For stage 1 of DeepFloyd IF with DreamBooth, pay attention to these parameters:
* `--skip_save_text_encoder`, to skip saving the full T5 text encoder with the finetuned model | 20_10_11 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | * `--skip_save_text_encoder`, to skip saving the full T5 text encoder with the finetuned model
* `--use_8bit_adam`, to use the 8-bit Adam optimizer to save memory given the size of the optimizer state when training the full model
* `--learning_rate=1e-7`, a really low learning rate should be used for full model training, otherwise the model quality is degraded (you can use a higher learning rate with a larger batch size) | 20_10_12 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | Training with 8-bit Adam and a batch size of 4, the full model can be trained with ~48GB of memory.
```bash
export MODEL_NAME="DeepFloyd/IF-I-XL-v1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="dreambooth_if" | 20_10_13 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | accelerate launch train_dreambooth.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--instance_prompt="a photo of sks dog" \
--resolution=64 \
--train_batch_size=4 \
--gradient_accumulation_steps=1 \
--learning_rate=1e-7 \
--max_train_steps=150 \
--validation_prompt "a photo of sks dog" \
--validation_steps 25 \
--text_encoder_use_attention_mask \
--tokenizer_max_length 77 \
--pre_compute_text_embeddings \
--use_8bit_adam \ | 20_10_14 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | --text_encoder_use_attention_mask \
--tokenizer_max_length 77 \
--pre_compute_text_embeddings \
--use_8bit_adam \
--set_grads_to_none \
--skip_save_text_encoder \
--push_to_hub
```
</hfoption>
<hfoption id="Stage 2 DreamBooth">
For stage 2 of DeepFloyd IF with DreamBooth, pay attention to these parameters:
* `--learning_rate=5e-6`, use a lower learning rate with a smaller effective batch size
* `--resolution=256`, the expected resolution for the upscaler | 20_10_15 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | * `--resolution=256`, the expected resolution for the upscaler
* `--train_batch_size=2` and `--gradient_accumulation_steps=6`, training effectively on images with faces requires larger batch sizes
```bash
export MODEL_NAME="DeepFloyd/IF-II-L-v1.0"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="dreambooth_dog_upscale"
export VALIDATION_IMAGES="dog_downsized/image_1.png dog_downsized/image_2.png dog_downsized/image_3.png dog_downsized/image_4.png" | 20_10_16 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | accelerate launch train_dreambooth.py \
--report_to wandb \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--instance_prompt="a sks dog" \
--resolution=256 \
--train_batch_size=2 \
--gradient_accumulation_steps=6 \
--learning_rate=5e-6 \
--max_train_steps=2000 \
--validation_prompt="a sks dog" \
--validation_steps=150 \
--checkpointing_steps=500 \
--pre_compute_text_embeddings \
--tokenizer_max_length=77 \
--text_encoder_use_attention_mask \ | 20_10_17 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#deepfloyd-if | .md | --checkpointing_steps=500 \
--pre_compute_text_embeddings \
--tokenizer_max_length=77 \
--text_encoder_use_attention_mask \
--validation_images $VALIDATION_IMAGES \
--class_labels_conditioning timesteps \
--push_to_hub
```
</hfoption>
</hfoptions> | 20_10_18 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#training-tips | .md | Training the DeepFloyd IF model can be challenging, but here are some tips that we've found helpful:
- LoRA is sufficient for training the stage 1 model because the model's low resolution makes representing finer details difficult regardless. | 20_11_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#training-tips | .md | - For common or simple objects, you don't necessarily need to finetune the upscaler. Make sure the prompt passed to the upscaler is adjusted to remove the new token from the instance prompt. For example, if your stage 1 prompt is "a sks dog" then your stage 2 prompt should be "a dog".
- For finer details like faces, fully training the stage 2 upscaler is better than training the stage 2 model with LoRA. It also helps to use lower learning rates with larger batch sizes. | 20_11_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#training-tips | .md | - Lower learning rates should be used to train the stage 2 model.
- The [`DDPMScheduler`] works better than the DPMSolver used in the training scripts (see the sketch below). | 20_11_2 |
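For example, a minimal sketch of swapping the sampler at inference time, reusing the stage 1 checkpoint from earlier in this guide:
```py
from diffusers import DDPMScheduler, DiffusionPipeline

# Load the stage 1 pipeline (same checkpoint used earlier in this guide)
pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", use_safetensors=True)

# Swap in DDPMScheduler while keeping the existing scheduler config
pipe.scheduler = DDPMScheduler.from_config(pipe.scheduler.config)
```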
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/dreambooth.md | https://huggingface.co/docs/diffusers/en/training/dreambooth/#next-steps | .md | Congratulations on training your DreamBooth model! To learn more about how to use your new model, the following guide may be helpful:
- Learn how to [load a DreamBooth](../using-diffusers/loading_adapters) model for inference if you trained your model with LoRA. | 20_12_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 21_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 21_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#instructpix2pix | .md | [InstructPix2Pix](https://hf.co/papers/2211.09800) is a Stable Diffusion model trained to edit images from human-provided instructions. For example, your prompt can be "turn the clouds rainy" and the model will edit the input image accordingly. This model is conditioned on the text prompt (or editing instruction) and the input image. | 21_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#instructpix2pix | .md | This guide will explore the [train_instruct_pix2pix.py](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py) training script to help you become familiar with it, and how you can adapt it for your own use case.
Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
``` | 21_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#instructpix2pix | .md | ```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:
```bash
cd examples/instruct_pix2pix
pip install -r requirements.txt
```
<Tip> | 21_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#instructpix2pix | .md | ```bash
cd examples/instruct_pix2pix
pip install -r requirements.txt
```
<Tip>
🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.
</Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
``` | 21_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#instructpix2pix | .md | </Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
```
To set up a default 🤗 Accelerate environment without choosing any configurations:
```bash
accelerate config default
```
Or if your environment doesn't support an interactive shell, like a notebook, you can use:
```py
from accelerate.utils import write_basic_config | 21_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#instructpix2pix | .md | write_basic_config()
```
Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script.
<Tip> | 21_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#instructpix2pix | .md | <Tip>
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py) and let us know if you have any questions or concerns.
</Tip> | 21_1_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#script-parameters | .md | The training script has many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L65) function. Most parameters have default values that work pretty well, but you can also set your own values in the training command if you'd like. | 21_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#script-parameters | .md | For example, to increase the resolution of the input image:
```bash
accelerate launch train_instruct_pix2pix.py \
--resolution=512 \
```
Many of the basic and important parameters are described in the [Text-to-image](text2image#script-parameters) training guide, so this guide just focuses on the relevant parameters for InstructPix2Pix:
- `--original_image_column`: the original image before the edits are made
- `--edited_image_column`: the image after the edits are made | 21_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#script-parameters | .md | - `--edited_image_column`: the image after the edits are made
- `--edit_prompt_column`: the instructions to edit the image
- `--conditioning_dropout_prob`: the dropout probability for the edited image and edit prompts during training, which enables classifier-free guidance (CFG) for one or both conditioning inputs (see the example command below) | 21_2_2 |
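The flags above can be combined in a single command. In the sketch below, the dataset name and column names are only placeholders for your own dataset:
```bash
accelerate launch train_instruct_pix2pix.py \
  --pretrained_model_name_or_path="stable-diffusion-v1-5/stable-diffusion-v1-5" \
  --dataset_name="my-username/my-edit-dataset" \
  --original_image_column="before" \
  --edited_image_column="after" \
  --edit_prompt_column="instruction" \
  --conditioning_dropout_prob=0.05
```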
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | The dataset preprocessing code and training loop are found in the [`main()`](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L374) function. This is where you'll make your changes to the training script to adapt it for your own use-case. | 21_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | As with the script parameters, a walkthrough of the training script is provided in the [Text-to-image](text2image#training-script) training guide. Instead, this guide takes a look at the parts of the script that are relevant to InstructPix2Pix. | 21_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | The script begins by modifying the [number of input channels](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L445) in the first convolutional layer of the UNet to account for InstructPix2Pix's additional conditioning image:
```py
in_channels = 8
out_channels = unet.conv_in.out_channels
unet.register_to_config(in_channels=in_channels) | 21_3_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | with torch.no_grad():
new_conv_in = nn.Conv2d(
in_channels, out_channels, unet.conv_in.kernel_size, unet.conv_in.stride, unet.conv_in.padding
)
new_conv_in.weight.zero_()
new_conv_in.weight[:, :4, :, :].copy_(unet.conv_in.weight)
unet.conv_in = new_conv_in
```
These UNet parameters are [updated](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L545C1-L551C6) by the optimizer:
```py
optimizer = optimizer_cls( | 21_3_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | ```py
optimizer = optimizer_cls(
unet.parameters(),
lr=args.learning_rate,
betas=(args.adam_beta1, args.adam_beta2),
weight_decay=args.adam_weight_decay,
eps=args.adam_epsilon,
)
``` | 21_3_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | eps=args.adam_epsilon,
)
```
Next, the edited images and edit instructions are [preprocessed](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L624) and [tokenized](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L610C24-L610C24). It is important the same image transformations are applied to the original and edited images. | 21_3_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | ```py
def preprocess_train(examples):
preprocessed_images = preprocess_images(examples) | 21_3_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | original_images, edited_images = preprocessed_images.chunk(2)
original_images = original_images.reshape(-1, 3, args.resolution, args.resolution)
edited_images = edited_images.reshape(-1, 3, args.resolution, args.resolution)
examples["original_pixel_values"] = original_images
examples["edited_pixel_values"] = edited_images | 21_3_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | captions = list(examples[edit_prompt_column])
examples["input_ids"] = tokenize_captions(captions)
return examples
```
Finally, in the [training loop](https://github.com/huggingface/diffusers/blob/64603389da01082055a901f2883c4810d1144edb/examples/instruct_pix2pix/train_instruct_pix2pix.py#L730), it starts by encoding the edited images into latent space:
```py
latents = vae.encode(batch["edited_pixel_values"].to(weight_dtype)).latent_dist.sample()
latents = latents * vae.config.scaling_factor
``` | 21_3_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | latents = latents * vae.config.scaling_factor
```
Then, the script applies dropout to the original image and edit instruction embeddings to support CFG. This is what enables the model to modulate the influence of the edit instruction and original image on the edited image.
```py
encoder_hidden_states = text_encoder(batch["input_ids"])[0]
original_image_embeds = vae.encode(batch["original_pixel_values"].to(weight_dtype)).latent_dist.mode() | 21_3_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | if args.conditioning_dropout_prob is not None:
random_p = torch.rand(bsz, device=latents.device, generator=generator)
prompt_mask = random_p < 2 * args.conditioning_dropout_prob
prompt_mask = prompt_mask.reshape(bsz, 1, 1)
null_conditioning = text_encoder(tokenize_captions([""]).to(accelerator.device))[0]
encoder_hidden_states = torch.where(prompt_mask, null_conditioning, encoder_hidden_states) | 21_3_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | image_mask_dtype = original_image_embeds.dtype
image_mask = 1 - (
(random_p >= args.conditioning_dropout_prob).to(image_mask_dtype)
* (random_p < 3 * args.conditioning_dropout_prob).to(image_mask_dtype)
)
image_mask = image_mask.reshape(bsz, 1, 1, 1)
original_image_embeds = image_mask * original_image_embeds
``` | 21_3_11 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#training-script | .md | ```
That's pretty much it! Aside from the differences described here, the rest of the script is very similar to the [Text-to-image](text2image#training-script) training script, so feel free to check it out for more details. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. | 21_3_12 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | Once you're happy with the changes to your script or if you're okay with the default configuration, you're ready to launch the training script! 🚀 | 21_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | This guide uses the [fusing/instructpix2pix-1000-samples](https://huggingface.co/datasets/fusing/instructpix2pix-1000-samples) dataset, which is a smaller version of the [original dataset](https://huggingface.co/datasets/timbrooks/instructpix2pix-clip-filtered). You can also create and use your own dataset if you'd like (see the [Create a dataset for training](create_dataset) guide). | 21_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | Set the `MODEL_NAME` environment variable to the name of the model (can be a model id on the Hub or a path to a local model), and the `DATASET_ID` to the name of the dataset on the Hub. The script creates and saves all the components (feature extractor, scheduler, text encoder, UNet, etc.) to a subfolder in your repository.
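For example (the model id here is only an illustrative choice; the dataset is the one used in this guide):
```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export DATASET_ID="fusing/instructpix2pix-1000-samples"
```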
<Tip>
For better results, try longer training runs with a larger dataset. We've only tested this training script on a smaller-scale dataset.
<br> | 21_4_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | <br>
To monitor training progress with Weights and Biases, add the `--report_to=wandb` parameter to the training command and specify a validation image with `--val_image_url` and a validation prompt with `--validation_prompt`. This can be really useful for debugging the model.
</Tip>
If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.
```bash
accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \ | 21_4_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | ```bash
accelerate launch --mixed_precision="fp16" train_instruct_pix2pix.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$DATASET_ID \
--enable_xformers_memory_efficient_attention \
--resolution=256 \
--random_flip \
--train_batch_size=4 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--max_train_steps=15000 \
--checkpointing_steps=5000 \
--checkpoints_total_limit=1 \
--learning_rate=5e-05 \
--max_grad_norm=1 \
--lr_warmup_steps=0 \
--conditioning_dropout_prob=0.05 \ | 21_4_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | --learning_rate=5e-05 \
--max_grad_norm=1 \
--lr_warmup_steps=0 \
--conditioning_dropout_prob=0.05 \
--mixed_precision=fp16 \
--seed=42 \
--push_to_hub
```
After training is finished, you can use your new InstructPix2Pix for inference:
```py
import PIL
import requests
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image | 21_4_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("your_cool_model", torch_dtype=torch.float16).to("cuda")
generator = torch.Generator("cuda").manual_seed(0)
image = load_image("https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/test_pix2pix_4.png")
prompt = "add some ducks to the lake"
num_inference_steps = 20
image_guidance_scale = 1.5
guidance_scale = 10 | 21_4_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | edited_image = pipeline(
prompt,
image=image,
num_inference_steps=num_inference_steps,
image_guidance_scale=image_guidance_scale,
guidance_scale=guidance_scale,
generator=generator,
).images[0]
edited_image.save("edited_image.png")
``` | 21_4_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#launch-the-script | .md | guidance_scale=guidance_scale,
generator=generator,
).images[0]
edited_image.save("edited_image.png")
```
You should experiment with different `num_inference_steps`, `image_guidance_scale`, and `guidance_scale` values to see how they affect inference speed and quality. The guidance scale parameters are especially impactful because they control how much the original image and edit instructions affect the edited image. | 21_4_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#stable-diffusion-xl | .md | Stable Diffusion XL (SDXL) is a powerful text-to-image model that generates high-resolution images, and it adds a second text-encoder to its architecture. Use the [`train_instruct_pix2pix_sdxl.py`](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix_sdxl.py) script to train a SDXL model to follow image editing instructions.
The SDXL training script is discussed in more detail in the [SDXL training](sdxl) guide. | 21_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/instructpix2pix.md | https://huggingface.co/docs/diffusers/en/training/instructpix2pix/#next-steps | .md | Congratulations on training your own InstructPix2Pix model! 🥳 To learn more about the model, it may be helpful to:
- Read the [Instruction-tuning Stable Diffusion with InstructPix2Pix](https://huggingface.co/blog/instruction-tuning-sd) blog post to learn more about some experiments we've done with InstructPix2Pix, dataset preparation, and results for different instructions. | 21_6_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 22_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 22_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#text-to-image | .md | <Tip warning={true}>
The text-to-image script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset.
</Tip>
Text-to-image models like Stable Diffusion are conditioned to generate images given a text prompt. | 22_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#text-to-image | .md | Training a model can be taxing on your hardware, but if you enable `gradient_checkpointing` and `mixed_precision`, it is possible to train a model on a single 24GB GPU. If you're training with larger batch sizes or want to train faster, it's better to use GPUs with more than 30GB of memory. You can reduce your memory footprint by enabling memory-efficient attention with [xFormers](../optimization/xformers). JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support | 22_1_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#text-to-image | .md | JAX/Flax training is also supported for efficient training on TPUs and GPUs, but it doesn't support gradient checkpointing, gradient accumulation or xFormers. A GPU with at least 30GB of memory or a TPU v3 is recommended for training with Flax. | 22_1_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#text-to-image | .md | This guide will explore the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) training script to help you become familiar with it, and how you can adapt it for your own use-case.
Before running the script, make sure you install the library from source:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
``` | 22_1_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#text-to-image | .md | ```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
```
Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:
<hfoptions id="installation">
<hfoption id="PyTorch">
```bash
cd examples/text_to_image
pip install -r requirements.txt
```
</hfoption>
<hfoption id="Flax">
```bash
cd examples/text_to_image
pip install -r requirements_flax.txt
```
</hfoption>
</hfoptions>
<Tip> | 22_1_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#text-to-image | .md | ```bash
cd examples/text_to_image
pip install -r requirements_flax.txt
```
</hfoption>
</hfoptions>
<Tip>
🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more.
</Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
``` | 22_1_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#text-to-image | .md | </Tip>
Initialize an 🤗 Accelerate environment:
```bash
accelerate config
```
To set up a default 🤗 Accelerate environment without choosing any configurations:
```bash
accelerate config default
```
Or if your environment doesn't support an interactive shell, like a notebook, you can use:
```py
from accelerate.utils import write_basic_config | 22_1_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#text-to-image | .md | write_basic_config()
```
Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. | 22_1_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#script-parameters | .md | <Tip>
The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the script in detail. If you're interested in learning more, feel free to read through the [script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) and let us know if you have any questions or concerns.
</Tip> | 22_2_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#script-parameters | .md | </Tip>
The training script provides many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L193) function. This function provides default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like. | 22_2_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#script-parameters | .md | For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command:
```bash
accelerate launch train_text_to_image.py \
--mixed_precision="fp16"
```
Some basic and important parameters include:
- `--pretrained_model_name_or_path`: the name of the model on the Hub or a local path to the pretrained model
- `--dataset_name`: the name of the dataset on the Hub or a local path to the dataset to train on | 22_2_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#script-parameters | .md | - `--dataset_name`: the name of the dataset on the Hub or a local path to the dataset to train on
- `--image_column`: the name of the image column in the dataset to train on
- `--caption_column`: the name of the text column in the dataset to train on
- `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub | 22_2_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#script-parameters | .md | - `--output_dir`: where to save the trained model
- `--push_to_hub`: whether to push the trained model to the Hub
- `--checkpointing_steps`: frequency of saving a checkpoint as the model trains; this is useful because if training is interrupted for some reason, you can continue from that checkpoint by adding `--resume_from_checkpoint` to your training command (see the example below) | 22_2_4 |
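For example, a resumed run only needs the extra flag on top of your original arguments; `"latest"` should pick up the most recent checkpoint saved under the output directory (a path to a specific checkpoint also works). The environment variables are the same ones defined in the launch section later in this guide:
```bash
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$dataset_name \
  --checkpointing_steps=500 \
  --resume_from_checkpoint="latest"
```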
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#min-snr-weighting | .md | The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, but Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script.
Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:
```bash | 22_3_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#min-snr-weighting | .md | Add the `--snr_gamma` parameter and set it to the recommended value of 5.0:
```bash
accelerate launch train_text_to_image.py \
--snr_gamma=5.0
```
You can compare the loss surfaces for different `snr_gamma` values in this [Weights and Biases](https://wandb.ai/sayakpaul/text2image-finetune-minsnr) report. For smaller datasets, the effects of Min-SNR may not be as obvious compared to larger datasets. | 22_3_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | The dataset preprocessing code and training loop are found in the [`main()`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L490) function. If you need to adapt the training script, this is where you'll need to make your changes. | 22_4_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | The `train_text_to_image` script starts by [loading a scheduler](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L543) and tokenizer. You can choose to use a different scheduler here if you want:
```py
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
tokenizer = CLIPTokenizer.from_pretrained( | 22_4_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | tokenizer = CLIPTokenizer.from_pretrained(
args.pretrained_model_name_or_path, subfolder="tokenizer", revision=args.revision
)
```
Then the script [loads the UNet](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L619) model:
```py
load_model = UNet2DConditionModel.from_pretrained(input_dir, subfolder="unet")
model.register_to_config(**load_model.config) | 22_4_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | model.load_state_dict(load_model.state_dict())
``` | 22_4_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | Next, the text and image columns of the dataset need to be preprocessed. The [`tokenize_captions`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L724) function handles tokenizing the inputs, and the [`train_transforms`](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L742) function specifies the type of transforms to apply to the image. | 22_4_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | function specifies the type of transforms to apply to the image. Both of these functions are bundled into `preprocess_train`: | 22_4_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | ```py
def preprocess_train(examples):
images = [image.convert("RGB") for image in examples[image_column]]
examples["pixel_values"] = [train_transforms(image) for image in images]
examples["input_ids"] = tokenize_captions(examples)
return examples
``` | 22_4_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/8959c5b9dec1c94d6ba482c94a58d2215c5fd026/examples/text_to_image/train_text_to_image.py#L878) handles everything else. It encodes images into latent space, adds noise to the latents, computes the text embeddings to condition on, updates the model parameters, and saves and pushes the model to the Hub. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and | 22_4_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#training-script | .md | to the Hub. If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. | 22_4_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | Once you've made all your changes or you're okay with the default configuration, you're ready to launch the training script! 🚀
<hfoptions id="training-inference">
<hfoption id="PyTorch"> | 22_5_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | <hfoptions id="training-inference">
<hfoption id="PyTorch">
Let's train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters. Set the environment variables `MODEL_NAME` and `dataset_name` to the model and the dataset (either from the Hub or a local path). If you're training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command.
<Tip> | 22_5_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | <Tip>
To train on a local dataset, set the `TRAIN_DIR` and `OUTPUT_DIR` environment variables to the path of the dataset and where to save the model to.
</Tip>
```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export dataset_name="lambdalabs/naruto-blip-captions" | 22_5_2 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | accelerate launch --mixed_precision="fp16" train_text_to_image.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--use_ema \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--gradient_accumulation_steps=4 \
--gradient_checkpointing \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--enable_xformers_memory_efficient_attention \
--lr_scheduler="constant" --lr_warmup_steps=0 \
--output_dir="sd-naruto-model" \
--push_to_hub
``` | 22_5_3 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | --lr_scheduler="constant" --lr_warmup_steps=0 \
--output_dir="sd-naruto-model" \
--push_to_hub
```
</hfoption>
<hfoption id="Flax">
Training with Flax can be faster on TPUs and GPUs thanks to [@duongna21](https://github.com/duongna21). Flax is more efficient on a TPU, but GPU performance is also great.
Set the environment variables `MODEL_NAME` and `dataset_name` to the model and the dataset (either from the Hub or a local path).
<Tip> | 22_5_4 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | <Tip>
To train on a local dataset, set the `TRAIN_DIR` and `OUTPUT_DIR` environment variables to the path of the dataset and where to save the model to.
</Tip>
```bash
export MODEL_NAME="stable-diffusion-v1-5/stable-diffusion-v1-5"
export dataset_name="lambdalabs/naruto-blip-captions" | 22_5_5 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | python train_text_to_image_flax.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$dataset_name \
--resolution=512 --center_crop --random_flip \
--train_batch_size=1 \
--max_train_steps=15000 \
--learning_rate=1e-05 \
--max_grad_norm=1 \
--output_dir="sd-naruto-model" \
--push_to_hub
```
</hfoption>
</hfoptions>
Once training is complete, you can use your newly trained model for inference:
<hfoptions id="training-inference">
<hfoption id="PyTorch">
```py | 22_5_6 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | <hfoptions id="training-inference">
<hfoption id="PyTorch">
```py
from diffusers import StableDiffusionPipeline
import torch | 22_5_7 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | pipeline = StableDiffusionPipeline.from_pretrained("path/to/saved_model", torch_dtype=torch.float16, use_safetensors=True).to("cuda")
image = pipeline(prompt="yoda").images[0]
image.save("yoda-naruto.png")
```
</hfoption>
<hfoption id="Flax">
```py
import jax
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline | 22_5_8 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | pipeline, params = FlaxStableDiffusionPipeline.from_pretrained("path/to/saved_model", dtype=jax.numpy.bfloat16)
prompt = "yoda naruto"
prng_seed = jax.random.PRNGKey(0)
num_inference_steps = 50
num_samples = jax.device_count()
prompt = num_samples * [prompt]
prompt_ids = pipeline.prepare_inputs(prompt)
# shard inputs and rng
params = replicate(params)
prng_seed = jax.random.split(prng_seed, jax.device_count())
prompt_ids = shard(prompt_ids) | 22_5_9 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#launch-the-script | .md | images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
image.save("yoda-naruto.png")
```
</hfoption>
</hfoptions> | 22_5_10 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/text2image.md | https://huggingface.co/docs/diffusers/en/training/text2image/#next-steps | .md | Congratulations on training your own text-to-image model! To learn more about how to use your new model, the following guides may be helpful:
- Learn how to [load LoRA weights](../using-diffusers/loading_adapters#LoRA) for inference if you trained your model with LoRA.
- Learn more about how certain parameters like guidance scale or techniques such as prompt weighting can help you control inference in the [Text-to-image](../using-diffusers/conditional_image_generation) task guide. | 22_6_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md | https://huggingface.co/docs/diffusers/en/training/overview/ | .md | <!--Copyright 2024 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the | 23_0_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md | https://huggingface.co/docs/diffusers/en/training/overview/ | .md | an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
--> | 23_0_1 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md | https://huggingface.co/docs/diffusers/en/training/overview/#overview | .md | 🤗 Diffusers provides a collection of training scripts for you to train your own diffusion models. You can find all of our training scripts in [diffusers/examples](https://github.com/huggingface/diffusers/tree/main/examples).
Each training script is:
- **Self-contained**: the training script does not depend on any local files, and all packages required to run the script are installed from the `requirements.txt` file. | 23_1_0 |
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md | https://huggingface.co/docs/diffusers/en/training/overview/#overview | .md | - **Easy-to-tweak**: the training scripts are an example of how to train a diffusion model for a specific task and won't work out-of-the-box for every training scenario. You'll likely need to adapt the training script for your specific use-case. To help you with that, we've fully exposed the data preprocessing code and the training loop so you can modify it for your own use. | 23_1_1 |