
How to set offload_model=True with diffusers

#5
by zhaooooooo - opened

GPU: A100-SXM-80GB
Resolution: 720×1280
I get an out-of-memory (OOM) error. It happens at this line:
video = self.vae.decode(latents, return_dict=False)[0]

You can do this:

# `vae`, `dtype`, and `device` are assumed to be defined earlier in your script
pipe = WanPipeline.from_pretrained("Wan-AI/Wan2.2-T2V-A14B-Diffusers", vae=vae, torch_dtype=dtype)
pipe.enable_model_cpu_offload(device=device)  # offloads idle model components to CPU to reduce GPU memory
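For reference, here is a minimal end-to-end sketch of the same idea, assuming the usual Wan2.2 Diffusers setup: the VAE is loaded in float32 (it is more precision-sensitive) and the rest of the pipeline in bfloat16, then enable_model_cpu_offload() keeps only the component currently in use on the GPU. The prompt, frame count, step count, and guidance scale are illustrative values, not taken from this thread.

import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.2-T2V-A14B-Diffusers"

# VAE in float32 (more precision-sensitive), everything else in bfloat16
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

# Move idle components to CPU; only the active one sits on the GPU
pipe.enable_model_cpu_offload()

video = pipe(
    prompt="A cat walking on the beach at sunset",  # illustrative prompt
    height=720,
    width=1280,
    num_frames=81,            # illustrative settings, tune for your use case
    num_inference_steps=40,
    guidance_scale=4.0,
).frames[0]

export_to_video(video, "output.mp4", fps=16)

If decoding still runs out of memory at 720×1280, it may also be worth checking whether your diffusers version exposes tiled decoding for the Wan VAE (pipe.vae.enable_tiling()), which decodes the latents in smaller tiles.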
