Diffusers format of the mochi-1-preview model.

It was created with the conversion script: https://github.com/huggingface/diffusers/blob/main/scripts/convert_mochi_to_diffusers.py

The model can be loaded directly with `from_pretrained` using the mochi branch: https://github.com/huggingface/diffusers/tree/mochi-t2v

Alternatively, you can download the zipped weights directly: https://huggingface.co/feizhengcong/mochi-1-preview-diffusers/blob/main/diffusers-mochi.zip
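A minimal sketch of unpacking the zipped weights before loading. The real archive is `diffusers-mochi.zip` from the link above; here a tiny stand-in archive is created locally so the snippet runs on its own, and the extraction directory name is an assumption.

```python
import pathlib
import tempfile
import zipfile

workdir = pathlib.Path(tempfile.mkdtemp())

# Stand-in for the downloaded diffusers-mochi.zip (replace with the real file).
archive = workdir / "diffusers-mochi.zip"
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("model_index.json", "{}")  # placeholder for the real pipeline files

# Extract the archive; the resulting directory is what you pass as model_path.
extract_dir = workdir / "diffusers-mochi"
with zipfile.ZipFile(archive) as zf:
    zf.extractall(extract_dir)

print(sorted(p.name for p in extract_dir.iterdir()))
```

The extracted directory can then be used as `model_path` in the pipeline example below.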

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Path to the converted weights (e.g. the unzipped diffusers-mochi directory)
model_path = "diffusers-mochi"

pipe = MochiPipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
frames = pipe(
    prompt,
    num_inference_steps=50,
    guidance_scale=4.5,
    num_frames=61,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(frames, "mochi.mp4")
```

Some generated results:

Many thanks for the discussion in https://github.com/huggingface/diffusers/pull/9769

Update (11.04): the VAE encoder has been released.


License: apache-2.0
