RePaint scheduler
Overview
DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks.
Intended for use with RePaintPipeline.
Based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models
and the original implementation by Andreas Lugmayr et al.: https://github.com/andreas128/RePaint
RePaintScheduler
class diffusers.RePaintScheduler
(
num_train_timesteps: int = 1000
beta_start: float = 0.0001
beta_end: float = 0.02
beta_schedule: str = 'linear'
eta: float = 0.0
trained_betas: typing.Optional[numpy.ndarray] = None
clip_sample: bool = True
)
Parameters
num_train_timesteps (int) — the number of diffusion steps used to train the model.
beta_start (float) — the starting beta value of inference.
beta_end (float) — the final beta value.
beta_schedule (str) —
the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
linear, scaled_linear, or squaredcos_cap_v2.
eta (float) —
the weight of added noise in a diffusion step. Its value is between 0.0 and 1.0; 0.0 corresponds to DDIM and
1.0 to a DDPM scheduler.
trained_betas (np.ndarray, optional) —
an option to pass an array of betas directly to the constructor, bypassing beta_start, beta_end, etc.
variance_type (str) —
options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small,
fixed_small_log, fixed_large, fixed_large_log, learned, or learned_range.
clip_sample (bool, default True) —
an option to clip the predicted sample between -1 and 1 for numerical stability.
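As an illustrative sketch (not the library's actual implementation, which builds the equivalent array with torch.linspace), the default linear beta schedule interpolates evenly between beta_start and beta_end over num_train_timesteps steps:

```python
# Hypothetical sketch of a linear beta schedule using only the standard
# library, mirroring the default constructor arguments above.

def linear_betas(num_train_timesteps=1000, beta_start=0.0001, beta_end=0.02):
    # Evenly spaced betas from beta_start to beta_end (inclusive).
    step = (beta_end - beta_start) / (num_train_timesteps - 1)
    return [beta_start + i * step for i in range(num_train_timesteps)]

betas = linear_betas()  # betas[0] == beta_start, betas[-1] == beta_end
```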
RePaint is a scheduler for DDPM-based inpainting inside a given mask.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler's __init__
function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and
from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf
scale_model_input
(
sample: FloatTensor
timestep: typing.Optional[int] = None
)
→
torch.FloatTensor
Parameters
sample (torch.FloatTensor) — the input sample.
timestep (int, optional) — the current timestep.
Returns
torch.FloatTensor
the scaled input sample
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
current timestep.
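RePaint itself requires no timestep-dependent input scaling, so the method can be thought of as a pass-through kept for API compatibility. A simplified sketch (scalars standing in for torch.FloatTensor, not the library's exact code):

```python
# Simplified sketch: RePaint does not rescale the model input, so
# scale_model_input behaves as an identity function on the sample.
# Schedulers that do scale inputs would transform `sample` here.

def scale_model_input(sample, timestep=None):
    # Kept so RePaint is interchangeable with schedulers that scale inputs.
    return sample
```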
step
(
model_output: FloatTensor
timestep: int
sample: FloatTensor
original_image: FloatTensor
mask: FloatTensor
generator: typing.Optional[torch._C.Generator] = None
return_dict: bool = True
)
→
~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple
Parameters
model_output (torch.FloatTensor) — the direct output from the learned
diffusion model.
timestep (int) — the current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) —
the current instance of the sample being created by the diffusion process.
original_image (torch.FloatTensor) —
the original image to inpaint.
mask (torch.FloatTensor) —
the mask where 0.0 values define which parts of the original image to inpaint (change).
generator (torch.Generator, optional) — a random number generator.
return_dict (bool) — whether to return a RePaintSchedulerOutput class rather than a
plain tuple.
Returns
~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple
~schedulers.scheduling_utils.RePaintSchedulerOutput if return_dict is True, otherwise a tuple. When
returning a tuple, the first element is the sample tensor.
Predicts the sample at the previous timestep by reversing the SDE. This is the core function that propagates the
diffusion process from the learned model outputs (most often the predicted noise).
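The key idea of the RePaint step is to compute the known region by forward-noising the original image and the unknown region by a reverse diffusion step, then merge the two with the mask. A simplified, hypothetical per-pixel sketch (eta = 0, noise fixed to 0.0 for determinism, and scalars standing in for the torch.FloatTensor batches the real method operates on):

```python
import math

# Hypothetical per-pixel sketch of the RePaint merge, not the library's
# implementation. alpha_bar_* are cumulative products of (1 - beta).
def repaint_step_pixel(model_eps, x_t, x_orig, mask_val,
                       alpha_bar_t, alpha_bar_prev, noise=0.0):
    # Known region: re-noise the original pixel to level t-1 (forward process).
    x_known = (math.sqrt(alpha_bar_prev) * x_orig
               + math.sqrt(1 - alpha_bar_prev) * noise)
    # Unknown region: deterministic (eta = 0) reverse step from the
    # model's noise prediction, via the predicted clean pixel x0.
    pred_x0 = (x_t - math.sqrt(1 - alpha_bar_t) * model_eps) / math.sqrt(alpha_bar_t)
    x_unknown = (math.sqrt(alpha_bar_prev) * pred_x0
                 + math.sqrt(1 - alpha_bar_prev) * model_eps)
    # Merge: mask == 1.0 keeps the known (original) content,
    # mask == 0.0 takes the inpainted content.
    return mask_val * x_known + (1 - mask_val) * x_unknown
```

The actual step() additionally draws fresh Gaussian noise from generator and supports the undo/jump-back resampling schedule described in the paper.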