Zero-shot Image-to-Image Translation by Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu.
The abstract of the paper is the following:
Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse and high-quality images. However, it is still challenging to directly apply these models for editing real images for two reasons. First, it is hard for users to come up with a perfect text prompt that accurately describes every visual detail in the input image. Second, while existing models can introduce desirable changes in certain regions, they often dramatically alter the input content and introduce unexpected changes in unwanted regions. In this work, we propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting. We first automatically discover editing directions that reflect desired edits in the text embedding space. To preserve the general content structure after editing, we further propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process. In addition, our method does not need additional training for these edits and can directly use the existing pre-trained text-to-image diffusion model. We conduct extensive experiments and show that our method outperforms existing and concurrent works for both real and synthetic image editing.
The pipeline exposes two arguments, source_embeds and target_embeds, that let you control the direction of the semantic edits in the final generated image. Say you want to translate from "cat" to "dog": the edit direction is then "cat -> dog". To reflect this in the pipeline, simply set the embeddings related to phrases including "cat" to source_embeds and those related to "dog" to target_embeds. Refer to the code example below for more details.
| Pipeline | Tasks | Demo |
|---|---|---|
| StableDiffusionPix2PixZeroPipeline | Text-Based Image Editing | [🤗 Space] (soon) |
Based on an image generated with the input prompt
```python
import requests
import torch

from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline


def download(embedding_url, local_filepath):
    r = requests.get(embedding_url)
    with open(local_filepath, "wb") as f:
        f.write(r.content)


model_ckpt = "CompVis/stable-diffusion-v1-4"
pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    model_ckpt, conditions_input_image=False, torch_dtype=torch.float16
)
pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
pipeline.to("cuda")

prompt = "a high resolution painting of a cat in the style of van gough"

# Download the pre-computed source ("cat") and target ("dog") concept embeddings.
src_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/cat.pt"
target_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/dog.pt"

for url in [src_embs_url, target_embs_url]:
    download(url, url.split("/")[-1])

src_embeds = torch.load(src_embs_url.split("/")[-1])
target_embeds = torch.load(target_embs_url.split("/")[-1])

# The edit direction "cat -> dog" is derived from source_embeds and target_embeds.
images = pipeline(
    prompt,
    source_embeds=src_embeds,
    target_embeds=target_embeds,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,
).images
images[0].save("edited_image_dog.png")
```
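If you don't have pre-computed embeddings for your source and target concepts, one rough way to obtain them is to encode a handful of sentences containing each concept with the pipeline's own CLIP text encoder and average the results. The snippet below is an illustrative sketch only: the `embed_sentences` helper and the sentence lists are made up for this example, and it assumes the `pipeline` object created in the snippet above.

```python
import torch


@torch.no_grad()
def embed_sentences(pipe, sentences):
    # Tokenize and encode the sentences with the pipeline's CLIP text encoder,
    # then average over the sentences to get a single (1, seq_len, dim) embedding.
    inputs = pipe.tokenizer(
        sentences,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    embeds = pipe.text_encoder(inputs.input_ids.to(pipe.device))[0]
    return embeds.mean(dim=0, keepdim=True)


# Illustrative sentence lists for the "cat" -> "dog" edit direction.
cat_sentences = ["a photo of a cat", "a painting of a cat", "a cat sitting on a couch"]
dog_sentences = ["a photo of a dog", "a painting of a dog", "a dog sitting on a couch"]

src_embeds = embed_sentences(pipeline, cat_sentences)
target_embeds = embed_sentences(pipeline, dog_sentences)
```

In practice, the more (and the more varied) sentences you encode per concept, the more the averaged embeddings isolate the concept itself rather than the phrasing around it.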
Based on an input image
Coming soon
( vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: Union[DDPMScheduler, DDIMScheduler, EulerAncestralDiscreteScheduler, LMSDiscreteScheduler], safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPFeatureExtractor, conditions_input_image: bool = False, requires_safety_checker: bool = True )
Parameters
text_encoder (CLIPTextModel) —
Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically
the clip-vit-large-patch14 variant.
tokenizer (CLIPTokenizer) —
Tokenizer of class CLIPTokenizer.
scheduler —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, EulerAncestralDiscreteScheduler, or DDPMScheduler.
safety_checker (StableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful.
Please refer to the model card for details.
feature_extractor (CLIPFeatureExtractor) —
Model that extracts features from generated images to be used as inputs for the safety_checker.
Pipeline for pixel-level image editing using Pix2Pix Zero. Based on Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
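For instance, the generic DiffusionPipeline methods for saving, loading, and device placement work here as for any other pipeline. A brief sketch (the local directory path is hypothetical):

```python
import torch
from diffusers import StableDiffusionPix2PixZeroPipeline

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", conditions_input_image=False, torch_dtype=torch.float16
)
pipe.save_pretrained("./pix2pix-zero-local")  # save all components to a local folder (hypothetical path)
pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained("./pix2pix-zero-local", torch_dtype=torch.float16)
pipe.to("cuda")  # run on a particular device
```

The signature and parameters of the pipeline's call method are documented below.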
(
prompt: typing.Union[str, typing.List[str], NoneType] = None
image: typing.Union[torch.FloatTensor, PIL.Image.Image, NoneType] = None
source_embeds: Tensor = None
target_embeds: Tensor = None
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
latents: typing.Optional[torch.FloatTensor] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
cross_attention_guidance_amount: float = 0.1
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: typing.Optional[int] = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
)
→
StableDiffusionPipelineOutput or tuple
Parameters
prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
image (PIL.Image.Image, optional) —
Image, or tensor representing an image batch, which will be used for conditioning.
source_embeds (torch.Tensor) —
Source concept embeddings. Generation of the embeddings as per the original paper. Used in discovering the edit direction.
target_embeds (torch.Tensor) —
Target concept embeddings. Generation of the embeddings as per the original paper. Used in discovering the edit direction.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w
of equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1.
A higher guidance scale encourages generating images that are closely linked to the text prompt,
usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead.
Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
schedulers.DDIMScheduler; will be ignored for others.
generator (torch.Generator or List[torch.Generator], optional) —
One or a list of torch generator(s) to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor will be generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
cross_attention_guidance_amount (float, defaults to 0.1) —
Amount of guidance needed from the reference cross-attention maps.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between
PIL: PIL.Image.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a
plain tuple.
callback (Callable, optional) —
A function that will be called every callback_steps steps during inference. The function will be
called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function will be called. If not specified, the callback will be
called at every step.
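To make the callback and callback_steps parameters concrete, here is a minimal sketch. The log_step helper is hypothetical, and pipeline, prompt, src_embeds, and target_embeds are assumed to be set up as in the usage example above.

```python
def log_step(step: int, timestep: int, latents):
    # Called every `callback_steps` denoising steps with the current latents.
    print(f"step={step} timestep={timestep} latents shape={tuple(latents.shape)}")


images = pipeline(
    prompt,
    source_embeds=src_embeds,
    target_embeds=target_embeds,
    callback=log_step,
    callback_steps=10,
).images
```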
Returns
StableDiffusionPipelineOutput or tuple
StableDiffusionPipelineOutput if return_dict
is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of
bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the
safety_checker.
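For instance, a short sketch of consuming the tuple form of the return value (again assuming the pipeline, prompt, and embeddings from the usage example above):

```python
images, nsfw_flags = pipeline(
    prompt,
    source_embeds=src_embeds,
    target_embeds=target_embeds,
    return_dict=False,
)
images[0].save("edited_image.png")
print(nsfw_flags)  # list of bools, one per generated image
```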
Function invoked when calling the pipeline for generation.
Examples:
```python
>>> import requests
>>> import torch

>>> from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline


>>> def download(embedding_url, local_filepath):
...     r = requests.get(embedding_url)
...     with open(local_filepath, "wb") as f:
...         f.write(r.content)


>>> model_ckpt = "CompVis/stable-diffusion-v1-4"
>>> pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
...     model_ckpt, conditions_input_image=False, torch_dtype=torch.float16
... )
>>> pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
>>> pipeline.to("cuda")

>>> prompt = "a high resolution painting of a cat in the style of van gough"
>>> source_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/cat.pt"
>>> target_emb_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/dog.pt"

>>> for url in [source_emb_url, target_emb_url]:
...     download(url, url.split("/")[-1])

>>> src_embeds = torch.load(source_emb_url.split("/")[-1])
>>> target_embeds = torch.load(target_emb_url.split("/")[-1])

>>> images = pipeline(
...     prompt,
...     source_embeds=src_embeds,
...     target_embeds=target_embeds,
...     num_inference_steps=50,
...     cross_attention_guidance_amount=0.15,
... ).images
>>> images[0].save("edited_image_dog.png")
```
Constructs the edit direction to steer the image generation process semantically.
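As a rough illustration of the idea, the edit direction can be taken as the difference between the mean target and mean source concept embeddings. The sketch below follows that description from the paper; it is not necessarily the library's exact implementation.

```python
import torch


def edit_direction_sketch(source_embeds: torch.Tensor, target_embeds: torch.Tensor) -> torch.Tensor:
    # source_embeds / target_embeds: (num_sentences, seq_len, hidden_dim) concept embeddings.
    # The edit direction is the difference of their per-concept means.
    return (target_embeds.mean(dim=0) - source_embeds.mean(dim=0)).unsqueeze(0)
```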
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet,
text_encoder, vae, and safety_checker have their state dicts saved to CPU and are then moved to
torch.device('meta'), being loaded onto the GPU only when their specific submodule has its
forward method called. Note that offloading happens on a submodule basis. Memory savings are higher than with
enable_model_cpu_offload, but performance is lower.
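A minimal usage sketch, assuming the behavior described above corresponds to the enable_sequential_cpu_offload() method exposed by diffusers pipelines and that the accelerate package is installed:

```python
import torch
from diffusers import StableDiffusionPix2PixZeroPipeline

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", conditions_input_image=False, torch_dtype=torch.float16
)
# Call this instead of pipe.to("cuda"); submodules are moved to the GPU
# on demand and offloaded again afterwards, trading speed for memory.
pipe.enable_sequential_cpu_offload()
```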
Generates a caption for a given image.