Commit 13f7e4c (verified) · 1 parent: 2266b5f
diffusers-benchmarking-bot committed: Upload folder using huggingface_hub
main/README.md CHANGED
@@ -88,6 +88,8 @@ PIXART-α Controlnet pipeline | Implementation of the controlnet model for pixar
 | FaithDiff Stable Diffusion XL Pipeline | Implementation of [(CVPR 2025) FaithDiff: Unleashing Diffusion Priors for Faithful Image Super-resolution](https://huggingface.co/papers/2411.18824) - FaithDiff is a faithful image super-resolution method that leverages latent diffusion models by actively adapting the diffusion prior and jointly fine-tuning its components (encoder and diffusion model) with an alignment module to ensure high fidelity and structural consistency. | [FaithDiff Stable Diffusion XL Pipeline](#faithdiff-stable-diffusion-xl-pipeline) | [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/jychen9811/FaithDiff) | [Junyang Chen, Jinshan Pan, Jiangxin Dong, IMAG Lab, (Adapted by Eliseu Silva)](https://github.com/JyChen9811/FaithDiff) |
 | Stable Diffusion 3 InstructPix2Pix Pipeline | Implementation of Stable Diffusion 3 InstructPix2Pix Pipeline | [Stable Diffusion 3 InstructPix2Pix Pipeline](#stable-diffusion-3-instructpix2pix-pipeline) | [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/BleachNick/SD3_UltraEdit_freeform) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/CaptainZZZ/sd3-instructpix2pix) | [Jiayu Zhang](https://github.com/xduzhangjiayu) and [Haozhe Zhao](https://github.com/HaozheZhao) |
 | Flux Kontext multiple images | A modified version of the `FluxKontextPipeline` that supports calling Flux Kontext with multiple reference images. | [Flux Kontext multiple input Pipeline](#flux-kontext-multiple-images) | - | [Net-Mist](https://github.com/Net-Mist) |
+
+
 To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, set to one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
 
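In practice, the loading path described by that README line looks like the minimal sketch below. The base checkpoint, dtype, and device are illustrative assumptions, not part of this commit; `pipeline_flux_with_cfg` is simply one of the community files touched here.

```py
# Minimal sketch: load a community pipeline by passing its file name
# (without the .py suffix) as `custom_pipeline`. Checkpoint, dtype, and
# device below are illustrative choices, not prescribed by this commit.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",            # a base checkpoint compatible with the custom pipeline
    custom_pipeline="pipeline_flux_with_cfg",  # file name from diffusers/examples/community
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
```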
main/pipeline_faithdiff_stable_diffusion_xl.py CHANGED
@@ -1705,6 +1705,12 @@ class FaithDiffStableDiffusionXLPipeline(
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
+        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
+        deprecate(
+            "enable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_tiling()
         self.unet.denoise_encoder.enable_tiling()

@@ -1713,6 +1719,12 @@ class FaithDiffStableDiffusionXLPipeline(
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
+        deprecate(
+            "disable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_tiling()
         self.unet.denoise_encoder.disable_tiling()
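Every deprecation added in this commit follows the pattern above: the pipeline-level helper keeps working until 0.40.0 but now emits a warning, and callers are pointed at the VAE component instead. A minimal migration sketch for the common case follows; the helper names are hypothetical, and note that FaithDiff's `enable_vae_tiling()` additionally tiles `self.unet.denoise_encoder`, which the component-level call alone does not cover.

```py
# Hypothetical migration helpers illustrating the replacement the new
# deprecation messages recommend: configure the VAE component directly
# instead of calling the deprecated pipeline-level methods.
def enable_vae_memory_savers(pipe):
    # Deprecated (still works, warns, removal planned for 0.40.0):
    #   pipe.enable_vae_tiling()
    #   pipe.enable_vae_slicing()
    pipe.vae.enable_tiling()
    pipe.vae.enable_slicing()


def disable_vae_memory_savers(pipe):
    pipe.vae.disable_tiling()
    pipe.vae.disable_slicing()
```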
 
main/pipeline_flux_kontext_multiple_images.py CHANGED
@@ -35,6 +35,7 @@ from diffusers.pipelines.pipeline_utils import DiffusionPipeline
 from diffusers.schedulers import FlowMatchEulerDiscreteScheduler
 from diffusers.utils import (
     USE_PEFT_BACKEND,
+    deprecate,
     is_torch_xla_available,
     logging,
     replace_example_docstring,
@@ -643,6 +644,12 @@ class FluxKontextPipeline(
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
+        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
+        deprecate(
+            "enable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_tiling()

     # Copied from diffusers.pipelines.flux.pipeline_flux.FluxPipeline.disable_vae_tiling
@@ -651,6 +658,12 @@ class FluxKontextPipeline(
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
+        deprecate(
+            "disable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_tiling()

     def preprocess_image(self, image: PipelineImageInput, _auto_resize: bool, multiple_of: int) -> torch.Tensor:
main/pipeline_flux_rf_inversion.py CHANGED
@@ -30,6 +30,7 @@ from diffusers.pipelines.pipeline_utils import DiffusionPipeline
 from diffusers.schedulers import FlowMatchEulerDiscreteScheduler
 from diffusers.utils import (
     USE_PEFT_BACKEND,
+    deprecate,
     is_torch_xla_available,
     logging,
     replace_example_docstring,
@@ -526,6 +527,12 @@ class RFInversionFluxPipeline(
         Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
         compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
         """
+        depr_message = f"Calling `enable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_slicing()`."
+        deprecate(
+            "enable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_slicing()

     def disable_vae_slicing(self):
@@ -533,6 +540,12 @@ class RFInversionFluxPipeline(
         Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_slicing()`."
+        deprecate(
+            "disable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_slicing()

     def enable_vae_tiling(self):
@@ -541,6 +554,12 @@ class RFInversionFluxPipeline(
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
+        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
+        deprecate(
+            "enable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_tiling()

     def disable_vae_tiling(self):
@@ -548,6 +567,12 @@ class RFInversionFluxPipeline(
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
+        deprecate(
+            "disable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_tiling()

     def prepare_latents_inversion(
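Because each of these shims routes through `diffusers.utils.deprecate`, which in its default configuration emits a `FutureWarning`, leftover call sites are easy to surface while migrating. A small, hypothetical test helper sketching that idea:

```py
import warnings


def run_with_deprecations_as_errors(fn, *args, **kwargs):
    """Run `fn` with FutureWarning escalated to an error.

    Hypothetical helper, assuming the `deprecate` utility's default
    FutureWarning behaviour: any code path that still calls a deprecated
    pipeline-level method such as `enable_vae_slicing()` raises here
    instead of silently warning, making stale call sites easy to find.
    """
    with warnings.catch_warnings():
        warnings.simplefilter("error", FutureWarning)
        return fn(*args, **kwargs)
```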
main/pipeline_flux_semantic_guidance.py CHANGED
@@ -35,6 +35,7 @@ from diffusers.pipelines.pipeline_utils import DiffusionPipeline
 from diffusers.schedulers import FlowMatchEulerDiscreteScheduler
 from diffusers.utils import (
     USE_PEFT_BACKEND,
+    deprecate,
     is_torch_xla_available,
     logging,
     replace_example_docstring,
@@ -702,6 +703,12 @@ class FluxSemanticGuidancePipeline(
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
+        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
+        deprecate(
+            "enable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_tiling()

     # Copied from diffusers.pipelines.flux.pipeline_flux.FluxPipeline.disable_vae_tiling
@@ -710,6 +717,12 @@ class FluxSemanticGuidancePipeline(
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
+        deprecate(
+            "disable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_tiling()

     # Copied from diffusers.pipelines.flux.pipeline_flux.FluxPipeline.prepare_latents
main/pipeline_flux_with_cfg.py CHANGED
@@ -28,6 +28,7 @@ from diffusers.pipelines.pipeline_utils import DiffusionPipeline
 from diffusers.schedulers import FlowMatchEulerDiscreteScheduler
 from diffusers.utils import (
     USE_PEFT_BACKEND,
+    deprecate,
     is_torch_xla_available,
     logging,
     replace_example_docstring,
@@ -503,6 +504,12 @@ class FluxCFGPipeline(DiffusionPipeline, FluxLoraLoaderMixin, FromSingleFileMixi
         Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
         compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
         """
+        depr_message = f"Calling `enable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_slicing()`."
+        deprecate(
+            "enable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_slicing()

     def disable_vae_slicing(self):
@@ -510,6 +517,12 @@ class FluxCFGPipeline(DiffusionPipeline, FluxLoraLoaderMixin, FromSingleFileMixi
         Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_slicing()`."
+        deprecate(
+            "disable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_slicing()

     def enable_vae_tiling(self):
@@ -518,6 +531,12 @@ class FluxCFGPipeline(DiffusionPipeline, FluxLoraLoaderMixin, FromSingleFileMixi
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
+        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
+        deprecate(
+            "enable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_tiling()

     def disable_vae_tiling(self):
@@ -525,6 +544,12 @@ class FluxCFGPipeline(DiffusionPipeline, FluxLoraLoaderMixin, FromSingleFileMixi
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
+        deprecate(
+            "disable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_tiling()

     def prepare_latents(
main/pipeline_stable_diffusion_3_differential_img2img.py CHANGED
@@ -29,11 +29,7 @@ from diffusers.models.transformers import SD3Transformer2DModel
 from diffusers.pipelines.pipeline_utils import DiffusionPipeline
 from diffusers.pipelines.stable_diffusion_3.pipeline_output import StableDiffusion3PipelineOutput
 from diffusers.schedulers import FlowMatchEulerDiscreteScheduler
-from diffusers.utils import (
-    is_torch_xla_available,
-    logging,
-    replace_example_docstring,
-)
+from diffusers.utils import is_torch_xla_available, logging, replace_example_docstring
 from diffusers.utils.torch_utils import randn_tensor


@@ -504,6 +504,12 @@ class StableDiffusionBoxDiffPipeline(
504
  Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
505
  compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
506
  """
 
 
 
 
 
 
507
  self.vae.enable_slicing()
508
 
509
  def disable_vae_slicing(self):
@@ -511,6 +517,12 @@ class StableDiffusionBoxDiffPipeline(
511
  Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
512
  computing decoding in one step.
513
  """
 
 
 
 
 
 
514
  self.vae.disable_slicing()
515
 
516
  def enable_vae_tiling(self):
@@ -519,6 +531,12 @@ class StableDiffusionBoxDiffPipeline(
519
  compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
520
  processing larger images.
521
  """
 
 
 
 
 
 
522
  self.vae.enable_tiling()
523
 
524
  def disable_vae_tiling(self):
@@ -526,6 +544,12 @@ class StableDiffusionBoxDiffPipeline(
526
  Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
527
  computing decoding in one step.
528
  """
 
 
 
 
 
 
529
  self.vae.disable_tiling()
530
 
531
  def _encode_prompt(
 
504
  Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
505
  compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
506
  """
507
+ depr_message = f"Calling `enable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_slicing()`."
508
+ deprecate(
509
+ "enable_vae_slicing",
510
+ "0.40.0",
511
+ depr_message,
512
+ )
513
  self.vae.enable_slicing()
514
 
515
  def disable_vae_slicing(self):
 
517
  Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
518
  computing decoding in one step.
519
  """
520
+ depr_message = f"Calling `disable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_slicing()`."
521
+ deprecate(
522
+ "disable_vae_slicing",
523
+ "0.40.0",
524
+ depr_message,
525
+ )
526
  self.vae.disable_slicing()
527
 
528
  def enable_vae_tiling(self):
 
531
  compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
532
  processing larger images.
533
  """
534
+ depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
535
+ deprecate(
536
+ "enable_vae_tiling",
537
+ "0.40.0",
538
+ depr_message,
539
+ )
540
  self.vae.enable_tiling()
541
 
542
  def disable_vae_tiling(self):
 
544
  Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
545
  computing decoding in one step.
546
  """
547
+ depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
548
+ deprecate(
549
+ "disable_vae_tiling",
550
+ "0.40.0",
551
+ depr_message,
552
+ )
553
  self.vae.disable_tiling()
554
 
555
  def _encode_prompt(
main/pipeline_stable_diffusion_pag.py CHANGED
@@ -471,6 +471,12 @@ class StableDiffusionPAGPipeline(
         Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
         compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
         """
+        depr_message = f"Calling `enable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_slicing()`."
+        deprecate(
+            "enable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_slicing()

     def disable_vae_slicing(self):
@@ -478,6 +484,12 @@ class StableDiffusionPAGPipeline(
         Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_slicing()`."
+        deprecate(
+            "disable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_slicing()

     def enable_vae_tiling(self):
@@ -486,6 +498,12 @@ class StableDiffusionPAGPipeline(
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
+        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
+        deprecate(
+            "enable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_tiling()

     def disable_vae_tiling(self):
@@ -493,6 +511,12 @@ class StableDiffusionPAGPipeline(
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
+        deprecate(
+            "disable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_tiling()

     def _encode_prompt(
main/pipeline_stg_hunyuan_video.py CHANGED
@@ -26,7 +26,7 @@ from diffusers.models import AutoencoderKLHunyuanVideo, HunyuanVideoTransformer3
 from diffusers.pipelines.hunyuan_video.pipeline_output import HunyuanVideoPipelineOutput
 from diffusers.pipelines.pipeline_utils import DiffusionPipeline
 from diffusers.schedulers import FlowMatchEulerDiscreteScheduler
-from diffusers.utils import is_torch_xla_available, logging, replace_example_docstring
+from diffusers.utils import deprecate, is_torch_xla_available, logging, replace_example_docstring
 from diffusers.utils.torch_utils import randn_tensor
 from diffusers.video_processor import VideoProcessor

@@ -481,6 +481,12 @@ class HunyuanVideoSTGPipeline(DiffusionPipeline, HunyuanVideoLoraLoaderMixin):
         Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
         compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
         """
+        depr_message = f"Calling `enable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_slicing()`."
+        deprecate(
+            "enable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_slicing()

     def disable_vae_slicing(self):
@@ -488,6 +494,12 @@ class HunyuanVideoSTGPipeline(DiffusionPipeline, HunyuanVideoLoraLoaderMixin):
         Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_slicing()`."
+        deprecate(
+            "disable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_slicing()

     def enable_vae_tiling(self):
@@ -496,6 +508,12 @@ class HunyuanVideoSTGPipeline(DiffusionPipeline, HunyuanVideoLoraLoaderMixin):
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
+        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
+        deprecate(
+            "enable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_tiling()

     def disable_vae_tiling(self):
@@ -503,6 +521,12 @@ class HunyuanVideoSTGPipeline(DiffusionPipeline, HunyuanVideoLoraLoaderMixin):
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
+        deprecate(
+            "disable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_tiling()

     @property
main/pipeline_stg_mochi.py CHANGED
@@ -26,11 +26,7 @@ from diffusers.models import AutoencoderKLMochi, MochiTransformer3DModel
 from diffusers.pipelines.mochi.pipeline_output import MochiPipelineOutput
 from diffusers.pipelines.pipeline_utils import DiffusionPipeline
 from diffusers.schedulers import FlowMatchEulerDiscreteScheduler
-from diffusers.utils import (
-    is_torch_xla_available,
-    logging,
-    replace_example_docstring,
-)
+from diffusers.utils import deprecate, is_torch_xla_available, logging, replace_example_docstring
 from diffusers.utils.torch_utils import randn_tensor
 from diffusers.video_processor import VideoProcessor

@@ -458,6 +454,12 @@ class MochiSTGPipeline(DiffusionPipeline, Mochi1LoraLoaderMixin):
         Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
         compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
         """
+        depr_message = f"Calling `enable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_slicing()`."
+        deprecate(
+            "enable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_slicing()

     def disable_vae_slicing(self):
@@ -465,6 +467,12 @@ class MochiSTGPipeline(DiffusionPipeline, Mochi1LoraLoaderMixin):
         Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_slicing()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_slicing()`."
+        deprecate(
+            "disable_vae_slicing",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_slicing()

     def enable_vae_tiling(self):
@@ -473,6 +481,12 @@ class MochiSTGPipeline(DiffusionPipeline, Mochi1LoraLoaderMixin):
         compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
         processing larger images.
         """
+        depr_message = f"Calling `enable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.enable_tiling()`."
+        deprecate(
+            "enable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.enable_tiling()

     def disable_vae_tiling(self):
@@ -480,6 +494,12 @@ class MochiSTGPipeline(DiffusionPipeline, Mochi1LoraLoaderMixin):
         Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
         computing decoding in one step.
         """
+        depr_message = f"Calling `disable_vae_tiling()` on a `{self.__class__.__name__}` is deprecated and this method will be removed in a future version. Please use `pipe.vae.disable_tiling()`."
+        deprecate(
+            "disable_vae_tiling",
+            "0.40.0",
+            depr_message,
+        )
         self.vae.disable_tiling()

     def prepare_latents(