Dataset columns: source (stringclasses, 273 values), url (stringlengths, 47–172), file_type (stringclasses, 1 value), chunk (stringlengths, 1–512), chunk_id (stringlengths, 5–9)
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#overview
.md
- **Beginner-friendly**: the training scripts are designed to be beginner-friendly and easy to understand, rather than including the latest state-of-the-art methods to get the best and most competitive results. Any training methods we consider too complex are purposefully left out. - **Single-purpose**: each training script is expressly designed for only one task to keep it readable and understandable. Our current collection of training scripts includes:
23_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#overview
.md
Our current collection of training scripts includes: | Training | SDXL-support | LoRA-support | Flax-support | |---|---|---|---| | [unconditional image generation](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) | | | |
23_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#overview
.md
| [text-to-image](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) | πŸ‘ | πŸ‘ | πŸ‘ | | [textual inversion](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb) | | | πŸ‘ |
23_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#overview
.md
| [DreamBooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb) | πŸ‘ | πŸ‘ | πŸ‘ | | [ControlNet](https://github.com/huggingface/diffusers/tree/main/examples/controlnet) | πŸ‘ | | πŸ‘ | | [InstructPix2Pix](https://github.com/huggingface/diffusers/tree/main/examples/instruct_pix2pix) | πŸ‘ | | |
23_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#overview
.md
| [InstructPix2Pix](https://github.com/huggingface/diffusers/tree/main/examples/instruct_pix2pix) | πŸ‘ | | | | [Custom Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion) | | | | | [T2I-Adapters](https://github.com/huggingface/diffusers/tree/main/examples/t2i_adapter) | πŸ‘ | | | | [Kandinsky 2.2](https://github.com/huggingface/diffusers/tree/main/examples/kandinsky2_2/text_to_image) | | πŸ‘ | |
23_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#overview
.md
| [Kandinsky 2.2](https://github.com/huggingface/diffusers/tree/main/examples/kandinsky2_2/text_to_image) | | πŸ‘ | | | [Wuerstchen](https://github.com/huggingface/diffusers/tree/main/examples/wuerstchen/text_to_image) | | πŸ‘ | |
23_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#overview
.md
These examples are **actively** maintained, so please feel free to open an issue if they aren't working as expected. If you feel like another training example should be included, you're more than welcome to start a [Feature Request](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=) to discuss your feature idea with us and whether it meets our criteria of being self-contained, easy-to-tweak, beginner-friendly, and single-purpose.
23_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#install
.md
Make sure you can successfully run the latest versions of the example scripts by installing the library from source in a new virtual environment: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ```
23_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#install
.md
```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then navigate to the folder of the training script (for example, [DreamBooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)) and install the `requirements.txt` file. Some training scripts have a specific requirement file for SDXL, LoRA or Flax. If you're using one of these scripts, make sure you install its corresponding requirements file. ```bash cd examples/dreambooth
23_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/overview.md
https://huggingface.co/docs/diffusers/en/training/overview/#install
.md
```bash cd examples/dreambooth pip install -r requirements.txt # to train SDXL with DreamBooth pip install -r requirements_sdxl.txt ``` To speed up training and reduce memory usage, we recommend: - using PyTorch 2.0 or higher to automatically use [scaled dot product attention](../optimization/torch2.0#scaled-dot-product-attention) during training (you don't need to make any changes to the training code) - installing [xFormers](../optimization/xformers) to enable memory-efficient attention
23_2_2
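As a concrete illustration of the two recommendations above, here is a minimal Python sketch. It assumes xFormers is installed and uses the Stable Diffusion v1-5 UNet purely as an example; the example training scripts normally expose the same switch as a command-line flag rather than requiring you to call it yourself.

```py
import torch
from diffusers import UNet2DConditionModel

# Load the UNet a training script would fine-tune (Stable Diffusion v1-5 is just an example here).
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# On PyTorch 2.0+, attention already routes through scaled dot product attention automatically.
# With xFormers installed, memory-efficient attention can be switched on explicitly:
unet.enable_xformers_memory_efficient_attention()
```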
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
24_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
24_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#distributed-inference
.md
On distributed setups, you can run inference across multiple GPUs with πŸ€— [Accelerate](https://huggingface.co/docs/accelerate/index) or [PyTorch Distributed](https://pytorch.org/tutorials/beginner/dist_overview.html), which is useful for generating with multiple prompts in parallel. This guide will show you how to use πŸ€— Accelerate and PyTorch Distributed for distributed inference.
24_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#-accelerate
.md
πŸ€— [Accelerate](https://huggingface.co/docs/accelerate/index) is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code.
24_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#-accelerate
.md
To begin, create a Python file and initialize an [`accelerate.PartialState`] to create a distributed environment; your setup is automatically detected so you don't need to explicitly define the `rank` or `world_size`. Move the [`DiffusionPipeline`] to `distributed_state.device` to assign a GPU to each process. Now use the [`~accelerate.PartialState.split_between_processes`] utility as a context manager to automatically distribute the prompts across the available processes. ```py import torch
24_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#-accelerate
.md
```py import torch from accelerate import PartialState from diffusers import DiffusionPipeline
24_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#-accelerate
.md
pipeline = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True ) distributed_state = PartialState() pipeline.to(distributed_state.device)
24_2_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#-accelerate
.md
with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: result = pipeline(prompt).images[0] result.save(f"result_{distributed_state.process_index}.png") ``` Use the `--num_processes` argument to specify the number of GPUs to use, and call `accelerate launch` to run the script: ```bash accelerate launch --num_processes=2 run_distributed.py ``` <Tip>
24_2_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#-accelerate
.md
```bash accelerate launch --num_processes=2 run_distributed.py ``` <Tip> Refer to this minimal example [script](https://gist.github.com/sayakpaul/cfaebd221820d7b43fae638b4dfa01ba) for running inference across multiple GPUs. To learn more, take a look at the [Distributed Inference with πŸ€— Accelerate](https://huggingface.co/docs/accelerate/en/usage_guides/distributed_inference#distributed-inference-with-accelerate) guide. </Tip>
24_2_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#pytorch-distributed
.md
PyTorch supports [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) which enables data parallelism. To start, create a Python file and import `torch.distributed` and `torch.multiprocessing` to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a [`DiffusionPipeline`]: ```py import torch import torch.distributed as dist import torch.multiprocessing as mp
24_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#pytorch-distributed
.md
from diffusers import DiffusionPipeline
24_3_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#pytorch-distributed
.md
sd = DiffusionPipeline.from_pretrained( "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True ) ```
24_3_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#pytorch-distributed
.md
) ``` You'll want to create a function to run inference; [`init_process_group`](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group) handles creating a distributed environment with the type of backend to use, the `rank` of the current process, and the `world_size` or the number of processes participating. If you're running inference in parallel over 2 GPUs, then the `world_size` is 2.
24_3_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#pytorch-distributed
.md
Move the [`DiffusionPipeline`] to `rank` and use `get_rank` to assign a GPU to each process, where each process handles a different prompt: ```py def run_inference(rank, world_size): dist.init_process_group("nccl", rank=rank, world_size=world_size)
24_3_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#pytorch-distributed
.md
sd.to(rank) if torch.distributed.get_rank() == 0: prompt = "a dog" elif torch.distributed.get_rank() == 1: prompt = "a cat"
24_3_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#pytorch-distributed
.md
image = sd(prompt).images[0] image.save(f"./{prompt.replace(' ', '_')}.png") ``` To run the distributed inference, call [`mp.spawn`](https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn) to run the `run_inference` function on the number of GPUs defined in `world_size`: ```py def main(): world_size = 2 mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True)
24_3_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#pytorch-distributed
.md
if __name__ == "__main__": main() ``` Once you've completed the inference script, use the `--nproc_per_node` argument to specify the number of GPUs to use and call `torchrun` to run the script: ```bash torchrun --nproc_per_node=2 run_distributed.py ``` > [!TIP] > You can use `device_map` within a [`DiffusionPipeline`] to distribute its model-level components on multiple devices. Refer to the [Device placement](../tutorials/inference_with_big_models#device-placement) guide to learn more.
24_3_7
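As a hedged sketch of the tip above, the snippet below passes `device_map="balanced"` when loading the pipeline so its model-level components are spread over the visible GPUs; the checkpoint and the multi-GPU setup are assumptions for illustration.

```py
import torch
from diffusers import DiffusionPipeline

# "balanced" spreads the pipeline's components (UNet, text encoder, VAE, ...) across the available GPUs.
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    device_map="balanced",
)

print(pipeline.hf_device_map)  # inspect which device each component landed on
image = pipeline("a dog").images[0]
image.save("device_map_example.png")
```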
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
Modern diffusion systems such as [Flux](../api/pipelines/flux) are very large and have multiple models. For example, [Flux.1-Dev](https://hf.co/black-forest-labs/FLUX.1-dev) is made up of two text encoders - [T5-XXL](https://hf.co/google/t5-v1_1-xxl) and [CLIP-L](https://hf.co/openai/clip-vit-large-patch14) - a [diffusion transformer](../api/models/flux_transformer), and a [VAE](../api/models/autoencoderkl). With a model this size, it can be challenging to run inference on consumer GPUs.
24_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
Model sharding is a technique that distributes models across GPUs when the models don't fit on a single GPU. The example below assumes two 16GB GPUs are available for inference. Start by computing the text embeddings with the text encoders. Keep the text encoders on two GPUs by setting `device_map="balanced"`. The `balanced` strategy evenly distributes the model on all available GPUs. Use the `max_memory` parameter to allocate the maximum amount of memory for each text encoder on each GPU. > [!TIP]
24_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
> [!TIP] > **Only** load the text encoders for this step! The diffusion transformer and VAE are loaded in a later step to preserve memory. ```py from diffusers import FluxPipeline import torch
24_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
prompt = "a photo of a dog with cat-like look"
24_4_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
pipeline = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev", transformer=None, vae=None, device_map="balanced", max_memory={0: "16GB", 1: "16GB"}, torch_dtype=torch.bfloat16 ) with torch.no_grad(): print("Encoding prompts.") prompt_embeds, pooled_prompt_embeds, text_ids = pipeline.encode_prompt( prompt=prompt, prompt_2=None, max_sequence_length=512 ) ``` Once the text embeddings are computed, remove them from the GPU to make space for the diffusion transformer. ```py import gc
24_4_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
def flush(): gc.collect() torch.cuda.empty_cache() torch.cuda.reset_max_memory_allocated() torch.cuda.reset_peak_memory_stats() del pipeline.text_encoder del pipeline.text_encoder_2 del pipeline.tokenizer del pipeline.tokenizer_2 del pipeline
24_4_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
flush() ```
24_4_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
Load the diffusion transformer next which has 12.5B parameters. This time, set `device_map="auto"` to automatically distribute the model across two 16GB GPUs. The `auto` strategy is backed by [Accelerate](https://hf.co/docs/accelerate/index) and available as a part of the [Big Model Inference](https://hf.co/docs/accelerate/concept_guides/big_model_inference) feature. It starts by distributing a model across the fastest device first (GPU) before moving to slower devices like the CPU and hard drive if
24_4_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
by distributing a model across the fastest device first (GPU) before moving to slower devices like the CPU and hard drive if needed. The trade-off of storing model parameters on slower devices is slower inference latency.
24_4_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
```py from diffusers import FluxTransformer2DModel import torch
24_4_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
transformer = FluxTransformer2DModel.from_pretrained( "black-forest-labs/FLUX.1-dev", subfolder="transformer", device_map="auto", torch_dtype=torch.bfloat16 ) ``` > [!TIP] > At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models. You can also try `print(transformer.hf_device_map)` to see how the transformer model is sharded across devices.
24_4_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
Add the transformer model to the pipeline for denoising, but set the other model-level components like the text encoders and VAE to `None` because you don't need them yet. ```py pipeline = FluxPipeline.from_pretrained( "black-forest-labs/FLUX.1-dev", text_encoder=None, text_encoder_2=None, tokenizer=None, tokenizer_2=None, vae=None, transformer=transformer, torch_dtype=torch.bfloat16 )
24_4_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
print("Running denoising.") height, width = 768, 1360 latents = pipeline( prompt_embeds=prompt_embeds, pooled_prompt_embeds=pooled_prompt_embeds, num_inference_steps=50, guidance_scale=3.5, height=height, width=width, output_type="latent", ).images ``` Remove the pipeline and transformer from memory as they're no longer needed. ```py del pipeline.transformer del pipeline
24_4_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
flush() ``` Finally, decode the latents with the VAE into an image. The VAE is typically small enough to be loaded on a single GPU. ```py from diffusers import AutoencoderKL from diffusers.image_processor import VaeImageProcessor import torch vae = AutoencoderKL.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.bfloat16).to("cuda") vae_scale_factor = 2 ** (len(vae.config.block_out_channels)) image_processor = VaeImageProcessor(vae_scale_factor=vae_scale_factor)
24_4_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
with torch.no_grad(): print("Running decoding.") latents = FluxPipeline._unpack_latents(latents, height, width, vae_scale_factor) latents = (latents / vae.config.scaling_factor) + vae.config.shift_factor
24_4_14
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/distributed_inference.md
https://huggingface.co/docs/diffusers/en/training/distributed_inference/#model-sharding
.md
image = vae.decode(latents, return_dict=False)[0] image = image_processor.postprocess(image, output_type="pil") image[0].save("split_transformer.png") ``` By selectively loading and unloading the models you need at a given stage and sharding the largest models across multiple GPUs, it is possible to run inference with large models on consumer GPUs.
24_4_15
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
25_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
25_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
<Tip warning={true}> This script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset. </Tip>
25_1_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes an image prior model for creating image embeddings from text prompts, and a decoder model that generates images based on the prior model's embeddings. That's why you'll find two separate scripts in Diffusers for Kandinsky 2.2, one for training the prior model and one for training the decoder model. You can train both models separately, but to get the best results, you should train both
25_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
one for training the decoder model. You can train both models separately, but to get the best results, you should train both the prior and decoder models.
25_1_2
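To make the prior/decoder split concrete, here is a rough inference sketch with the pretrained Kandinsky 2.2 checkpoints (the checkpoints and generation settings are assumptions for illustration, not part of the training scripts): the prior maps a prompt to image embeddings, and the decoder turns those embeddings into an image, which is why each stage has its own training script.

```py
import torch
from diffusers import KandinskyV22PriorPipeline, KandinskyV22Pipeline

# Stage 1: the prior model maps the text prompt to CLIP image embeddings.
prior = KandinskyV22PriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
).to("cuda")
image_embeds, negative_image_embeds = prior("A robot naruto, 4k photo", guidance_scale=1.0).to_tuple()

# Stage 2: the decoder model generates an image conditioned on those embeddings.
decoder = KandinskyV22Pipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16
).to("cuda")
image = decoder(
    image_embeds=image_embeds, negative_image_embeds=negative_image_embeds, height=768, width=768
).images[0]
image.save("kandinsky_two_stage.png")
```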
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
Depending on your GPU, you may need to enable `gradient_checkpointing` (⚠️ not supported for the prior model!), `mixed_precision`, and `gradient_accumulation_steps` to help fit the model into memory and to speed up training. You can reduce your memory usage even more by enabling memory-efficient attention with [xFormers](../optimization/xformers) (version [v0.0.16](https://github.com/huggingface/diffusers/issues/2234#issuecomment-1416931212) fails for training on some GPUs so you may need to install a
25_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
fails for training on some GPUs so you may need to install a development version instead).
25_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
This guide explores the [train_text_to_image_prior.py](https://github.com/huggingface/diffusers/blob/main/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py) and the [train_text_to_image_decoder.py](https://github.com/huggingface/diffusers/blob/main/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py) scripts to help you become more familiar with them, and how you can adapt them for your own use-case. Before running the scripts, make sure you install the library from source:
25_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
Before running the scripts, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then navigate to the example folder containing the training script and install the required dependencies for the script you're using: ```bash cd examples/kandinsky2_2/text_to_image pip install -r requirements.txt ``` <Tip>
25_1_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
```bash cd examples/kandinsky2_2/text_to_image pip install -r requirements.txt ``` <Tip> πŸ€— Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the πŸ€— Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an πŸ€— Accelerate environment: ```bash accelerate config ```
25_1_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
</Tip> Initialize an πŸ€— Accelerate environment: ```bash accelerate config ``` To setup a default πŸ€— Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```py from accelerate.utils import write_basic_config
25_1_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. <Tip>
25_1_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#kandinsky-22
.md
<Tip> The following sections highlight parts of the training scripts that are important for understanding how to modify them, but they don't cover every aspect of the scripts in detail. If you're interested in learning more, feel free to read through the scripts and let us know if you have any questions or concerns. </Tip>
25_1_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#script-parameters
.md
The training scripts provide many parameters to help you customize your training run. All of the parameters and their descriptions are found in the [`parse_args()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py#L190) function. The training scripts provide default values for each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command
25_2_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#script-parameters
.md
each parameter, such as the training batch size and learning rate, but you can also set your own values in the training command if you'd like.
25_2_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#script-parameters
.md
For example, to speed up training with mixed precision using the fp16 format, add the `--mixed_precision` parameter to the training command: ```bash accelerate launch train_text_to_image_prior.py \ --mixed_precision="fp16" ``` Most of the parameters are identical to the parameters in the [Text-to-image](text2image#script-parameters) training guide, so let's get straight to a walkthrough of the Kandinsky training scripts!
25_2_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#min-snr-weighting
.md
The [Min-SNR](https://huggingface.co/papers/2303.09556) weighting strategy can help with training by rebalancing the loss to achieve faster convergence. The training script supports predicting `epsilon` (noise) or `v_prediction`, and Min-SNR is compatible with both prediction types. This weighting strategy is only supported by PyTorch and is unavailable in the Flax training script. Add the `--snr_gamma` parameter and set it to the recommended value of 5.0: ```bash
25_3_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#min-snr-weighting
.md
Add the `--snr_gamma` parameter and set it to the recommended value of 5.0: ```bash accelerate launch train_text_to_image_prior.py \ --snr_gamma=5.0 ```
25_3_1
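For intuition, the sketch below reimplements the Min-SNR weighting that `--snr_gamma` enables, for the `epsilon` (noise) prediction case; the helper name `min_snr_loss` is made up for this example, and the actual scripts adjust the weights slightly for other prediction types.

```py
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler
from diffusers.training_utils import compute_snr

def min_snr_loss(model_pred, target, timesteps, noise_scheduler, snr_gamma=5.0):
    # Weight each sample's MSE loss by min(SNR, gamma) / SNR to rebalance easy vs. hard timesteps.
    snr = compute_snr(noise_scheduler, timesteps)
    mse_loss_weights = torch.stack([snr, snr_gamma * torch.ones_like(snr)], dim=1).min(dim=1)[0] / snr
    loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
    loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
    return loss.mean()

# Smoke test with random tensors and a default DDPM scheduler.
scheduler = DDPMScheduler()
pred, target = torch.randn(2, 4, 8, 8), torch.randn(2, 4, 8, 8)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (2,))
print(min_snr_loss(pred, target, timesteps, scheduler))
```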
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
The training script is also similar to the [Text-to-image](text2image#training-script) training guide, but it's been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. <hfoptions id="script"> <hfoption id="prior model">
25_4_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
<hfoptions id="script"> <hfoption id="prior model"> The [`main()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py#L441) function contains the code for preparing the dataset and training the model.
25_4_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
One of the main differences you'll notice right away is that the training script also loads a [`~transformers.CLIPImageProcessor`] - in addition to a scheduler and tokenizer - for preprocessing images and a [`~transformers.CLIPVisionModelWithProjection`] model for encoding the images: ```py noise_scheduler = DDPMScheduler(beta_schedule="squaredcos_cap_v2", prediction_type="sample") image_processor = CLIPImageProcessor.from_pretrained( args.pretrained_prior_model_name_or_path, subfolder="image_processor"
25_4_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
image_processor = CLIPImageProcessor.from_pretrained( args.pretrained_prior_model_name_or_path, subfolder="image_processor" ) tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer")
25_4_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
with ContextManagers(deepspeed_zero_init_disabled_context_manager()): image_encoder = CLIPVisionModelWithProjection.from_pretrained( args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype ).eval() text_encoder = CLIPTextModelWithProjection.from_pretrained( args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype ).eval() ```
25_4_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype ).eval() ``` Kandinsky uses a [`PriorTransformer`] to generate the image embeddings, so you'll want to set up the optimizer to learn the prior model's parameters. ```py prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior") prior.train() optimizer = optimizer_cls( prior.parameters(), lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2),
25_4_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
prior.train() optimizer = optimizer_cls( prior.parameters(), lr=args.learning_rate, betas=(args.adam_beta1, args.adam_beta2), weight_decay=args.adam_weight_decay, eps=args.adam_epsilon, ) ``` Next, the input captions are tokenized, and images are [preprocessed](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py#L632) by the [`~transformers.CLIPImageProcessor`]: ```py def preprocess_train(examples):
25_4_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
```py def preprocess_train(examples): images = [image.convert("RGB") for image in examples[image_column]] examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) return examples ```
25_4_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples) return examples ``` Finally, the [training loop](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_prior.py#L718) converts the input images into latents, adds noise to the image embeddings, and makes a prediction: ```py model_pred = prior( noisy_latents, timestep=timesteps, proj_embedding=prompt_embeds,
25_4_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
```py model_pred = prior( noisy_latents, timestep=timesteps, proj_embedding=prompt_embeds, encoder_hidden_states=text_encoder_hidden_states, attention_mask=text_mask, ).predicted_image_embedding ``` If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. </hfoption> <hfoption id="decoder model">
25_4_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
</hfoption> <hfoption id="decoder model"> The [`main()`](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py#L440) function contains the code for preparing the dataset and training the model. Unlike the prior model, the decoder initializes a [`VQModel`] to decode the latents into images and it uses a [`UNet2DConditionModel`]: ```py with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
25_4_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
```py with ContextManagers(deepspeed_zero_init_disabled_context_manager()): vae = VQModel.from_pretrained( args.pretrained_decoder_model_name_or_path, subfolder="movq", torch_dtype=weight_dtype ).eval() image_encoder = CLIPVisionModelWithProjection.from_pretrained( args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype ).eval() unet = UNet2DConditionModel.from_pretrained(args.pretrained_decoder_model_name_or_path, subfolder="unet") ```
25_4_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
).eval() unet = UNet2DConditionModel.from_pretrained(args.pretrained_decoder_model_name_or_path, subfolder="unet") ``` Next, the script includes several image transforms and a [preprocessing](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py#L622) function for applying the transforms to the images and returning the pixel values: ```py def preprocess_train(examples):
25_4_12
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
```py def preprocess_train(examples): images = [image.convert("RGB") for image in examples[image_column]] examples["pixel_values"] = [train_transforms(image) for image in images] examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values return examples ```
25_4_13
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values return examples ``` Lastly, the [training loop](https://github.com/huggingface/diffusers/blob/6e68c71503682c8693cb5b06a4da4911dfd655ee/examples/kandinsky2_2/text_to_image/train_text_to_image_decoder.py#L706) handles converting the images to latents, adding noise, and predicting the noise residual.
25_4_14
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#training-script
.md
If you want to learn more about how the training loop works, check out the [Understanding pipelines, models and schedulers](../using-diffusers/write_own_pipeline) tutorial which breaks down the basic pattern of the denoising process. ```py model_pred = unet(noisy_latents, timesteps, None, added_cond_kwargs=added_cond_kwargs).sample[:, :4] ``` </hfoption> </hfoptions>
25_4_15
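Putting the decoder walkthrough together, here is a condensed, hedged sketch of one training step: the variable and batch-key names are assumptions based on the description above rather than a verbatim excerpt from train_text_to_image_decoder.py.

```py
import torch
import torch.nn.functional as F

def decoder_training_step(batch, vae, image_encoder, unet, noise_scheduler, weight_dtype=torch.float32):
    # 1. Convert the images into latents with the VQ model ("movq").
    latents = vae.encode(batch["pixel_values"].to(weight_dtype)).latents

    # 2. Add noise at randomly sampled timesteps (forward diffusion).
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
    ).long()
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # 3. Condition the UNet on CLIP image embeddings and predict the noise residual.
    image_embeds = image_encoder(batch["clip_pixel_values"].to(weight_dtype)).image_embeds
    added_cond_kwargs = {"image_embeds": image_embeds}
    model_pred = unet(noisy_latents, timesteps, None, added_cond_kwargs=added_cond_kwargs).sample[:, :4]

    return F.mse_loss(model_pred.float(), noise.float(), reduction="mean")
```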
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
Once you’ve made all your changes or you’re okay with the default configuration, you’re ready to launch the training script! πŸš€
25_5_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
You'll train on the [Naruto BLIP captions](https://huggingface.co/datasets/lambdalabs/naruto-blip-captions) dataset to generate your own Naruto characters, but you can also create and train on your own dataset by following the [Create a dataset for training](create_dataset) guide. Set the environment variable `DATASET_NAME` to the name of the dataset on the Hub or if you're training on your own files, set the environment variable `TRAIN_DIR` to a path to your dataset.
25_5_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
If you’re training on more than one GPU, add the `--multi_gpu` parameter to the `accelerate launch` command. <Tip> To monitor training progress with Weights & Biases, add the `--report_to=wandb` parameter to the training command. You’ll also need to add the `--validation_prompts` parameter to the training command to keep track of results. This can be really useful for debugging the model and viewing intermediate results. </Tip> <hfoptions id="training-inference"> <hfoption id="prior model"> ```bash
25_5_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
</Tip> <hfoptions id="training-inference"> <hfoption id="prior model"> ```bash export DATASET_NAME="lambdalabs/naruto-blip-captions"
25_5_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
accelerate launch --mixed_precision="fp16" train_text_to_image_prior.py \ --dataset_name=$DATASET_NAME \ --resolution=768 \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --max_train_steps=15000 \ --learning_rate=1e-05 \ --max_grad_norm=1 \ --checkpoints_total_limit=3 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --validation_prompts="A robot naruto, 4k photo" \ --report_to="wandb" \ --push_to_hub \ --output_dir="kandi2-prior-naruto-model" ``` </hfoption> <hfoption id="decoder model">
25_5_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
--push_to_hub \ --output_dir="kandi2-prior-naruto-model" ``` </hfoption> <hfoption id="decoder model"> ```bash export DATASET_NAME="lambdalabs/naruto-blip-captions"
25_5_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
accelerate launch --mixed_precision="fp16" train_text_to_image_decoder.py \ --dataset_name=$DATASET_NAME \ --resolution=768 \ --train_batch_size=1 \ --gradient_accumulation_steps=4 \ --gradient_checkpointing \ --max_train_steps=15000 \ --learning_rate=1e-05 \ --max_grad_norm=1 \ --checkpoints_total_limit=3 \ --lr_scheduler="constant" \ --lr_warmup_steps=0 \ --validation_prompts="A robot naruto, 4k photo" \ --report_to="wandb" \ --push_to_hub \ --output_dir="kandi2-decoder-naruto-model" ``` </hfoption>
25_5_6
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
--report_to="wandb" \ --push_to_hub \ --output_dir="kandi2-decoder-naruto-model" ``` </hfoption> </hfoptions> Once training is finished, you can use your newly trained model for inference! <hfoptions id="training-inference"> <hfoption id="prior model"> ```py from diffusers import AutoPipelineForText2Image, DiffusionPipeline import torch
25_5_7
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
prior_pipeline = DiffusionPipeline.from_pretrained("kandi2-prior-naruto-model", torch_dtype=torch.float16) prior_components = {"prior_" + k: v for k,v in prior_pipeline.components.items()} pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", **prior_components, torch_dtype=torch.float16)
25_5_8
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
pipeline.enable_model_cpu_offload() prompt="A robot naruto, 4k photo" image = pipeline(prompt=prompt).images[0] ``` <Tip> Feel free to replace `kandinsky-community/kandinsky-2-2-decoder` with your own trained decoder checkpoint! </Tip> </hfoption> <hfoption id="decoder model"> ```py from diffusers import AutoPipelineForText2Image import torch
25_5_9
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
pipeline = AutoPipelineForText2Image.from_pretrained("path/to/saved/model", torch_dtype=torch.float16) pipeline.enable_model_cpu_offload() prompt="A robot naruto, 4k photo" image = pipeline(prompt=prompt).images[0] ``` For the decoder model, you can also perform inference from a saved checkpoint which can be useful for viewing intermediate results. In this case, load the checkpoint into the UNet: ```py from diffusers import AutoPipelineForText2Image, UNet2DConditionModel
25_5_10
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#launch-the-script
.md
unet = UNet2DConditionModel.from_pretrained("path/to/saved/model" + "/checkpoint-<N>/unet") pipeline = AutoPipelineForText2Image.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", unet=unet, torch_dtype=torch.float16) pipeline.enable_model_cpu_offload() image = pipeline(prompt="A robot naruto, 4k photo").images[0] ``` </hfoption> </hfoptions>
25_5_11
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#next-steps
.md
Congratulations on training a Kandinsky 2.2 model! To learn more about how to use your new model, the following guides may be helpful: - Read the [Kandinsky](../using-diffusers/kandinsky) guide to learn how to use it for a variety of different tasks (text-to-image, image-to-image, inpainting, interpolation), and how it can be combined with a ControlNet.
25_6_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/kandinsky.md
https://huggingface.co/docs/diffusers/en/training/kandinsky/#next-steps
.md
- Check out the [DreamBooth](dreambooth) and [LoRA](lora) training guides to learn how to train a personalized Kandinsky model with just a few example images. These two training techniques can even be combined!
25_6_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/
.md
<!--Copyright 2024 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
26_0_0
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. -->
26_0_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/#wuerstchen
.md
The [Wuerstchen](https://hf.co/papers/2306.00637) model drastically reduces computational costs by compressing the latent space by 42x without compromising image quality, which also accelerates inference. During training, Wuerstchen uses two models (VQGAN + autoencoder) to compress the latents, and then a third model (text-conditioned latent diffusion model) is conditioned on this highly compressed space to generate an image.
26_1_0
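To see what the compressed-latent stack described above produces end to end, here is a short, hedged inference sketch with the pretrained combined checkpoint (the `warp-ai/wuerstchen` repository and the prompt are assumptions for illustration, separate from the training script this guide covers).

```py
import torch
from diffusers import AutoPipelineForText2Image

# The combined pipeline chains the text-conditioned prior with the decoder/VQGAN stages.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-ai/wuerstchen", torch_dtype=torch.float16
).to("cuda")

image = pipeline(prompt="An anthropomorphic cat dressed as a firefighter").images[0]
image.save("wuerstchen_example.png")
```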
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/#wuerstchen
.md
To fit the prior model into GPU memory and to speed up training, try enabling `gradient_accumulation_steps`, `gradient_checkpointing`, and `mixed_precision`. This guide explores the [train_text_to_image_prior.py](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/train_text_to_image_prior.py) script to help you become more familiar with it, and how you can adapt it for your own use-case.
26_1_1
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/#wuerstchen
.md
Before running the script, make sure you install the library from source: ```bash git clone https://github.com/huggingface/diffusers cd diffusers pip install . ``` Then navigate to the example folder containing the training script and install the required dependencies for the script you're using: ```bash cd examples/wuerstchen/text_to_image pip install -r requirements.txt ``` <Tip>
26_1_2
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/#wuerstchen
.md
```bash cd examples/wuerstchen/text_to_image pip install -r requirements.txt ``` <Tip> πŸ€— Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed-precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the πŸ€— Accelerate [Quick tour](https://huggingface.co/docs/accelerate/quicktour) to learn more. </Tip> Initialize an πŸ€— Accelerate environment: ```bash accelerate config ```
26_1_3
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/#wuerstchen
.md
</Tip> Initialize an πŸ€— Accelerate environment: ```bash accelerate config ``` To setup a default πŸ€— Accelerate environment without choosing any configurations: ```bash accelerate config default ``` Or if your environment doesn't support an interactive shell, like a notebook, you can use: ```py from accelerate.utils import write_basic_config
26_1_4
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/#wuerstchen
.md
write_basic_config() ``` Lastly, if you want to train a model on your own dataset, take a look at the [Create a dataset for training](create_dataset) guide to learn how to create a dataset that works with the training script. <Tip>
26_1_5
/Users/nielsrogge/Documents/python_projecten/diffusers/docs/source/en/training/wuerstchen.md
https://huggingface.co/docs/diffusers/en/training/wuerstchen/#wuerstchen
.md
<Tip> The following sections highlight parts of the training script that are important for understanding how to modify it, but they don't cover every aspect of the [script](https://github.com/huggingface/diffusers/blob/main/examples/wuerstchen/text_to_image/train_text_to_image_prior.py) in detail. If you're interested in learning more, feel free to read through the script and let us know if you have any questions or concerns. </Tip>
26_1_6