Converted to bfloat16 from rain1011/pyramid-flow-sd3. Use the text encoders and tokenizers from that repo (or from SD3); there is no point in re-uploading them unchanged.

Inference code is available here: github.com/jy0205/Pyramid-Flow.

Both the 384p and 768p variants run on 24 GB of VRAM. For 16 steps (a 5-second video), 384p takes a little over a minute on a 3090 and 768p takes about 7 minutes. For 31 steps (a 10-second video), 384p took about 10 minutes.

I highly recommend setting `cpu_offloading=True` when generating unless you have more than 24 GB of VRAM.
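A minimal generation sketch, adapted from the example in the Pyramid-Flow repo linked above. The class and argument names (`PyramidDiTForVideoGeneration`, `temp`, `video_guidance_scale`, etc.) come from that repo and may change between versions; the checkpoint path and prompt are placeholders:

```python
import torch
from pyramid_dit import PyramidDiTForVideoGeneration  # from github.com/jy0205/Pyramid-Flow
from diffusers.utils import export_to_video

# Placeholder path: point this at a local download of the bf16 checkpoint,
# with the text encoders/tokenizers copied in from rain1011/pyramid-flow-sd3.
model = PyramidDiTForVideoGeneration(
    "/path/to/pyramid-flow-sd3-bf16",
    model_dtype="bf16",
    model_variant="diffusion_transformer_768p",  # or "diffusion_transformer_384p"
)

with torch.no_grad(), torch.cuda.amp.autocast(enabled=True, dtype=torch.bfloat16):
    frames = model.generate(
        prompt="A drone shot of waves crashing against a rocky coast at sunset",
        num_inference_steps=[20, 20, 20],
        video_num_inference_steps=[10, 10, 10],
        height=768,
        width=1280,
        temp=16,                 # 16 -> ~5 s of video; 31 -> ~10 s
        guidance_scale=9.0,
        video_guidance_scale=5.0,
        output_type="pil",
        cpu_offloading=True,     # keeps peak VRAM within 24 GB
    )

export_to_video(frames, "output.mp4", fps=24)
```

This cannot run without the multi-gigabyte model weights, so treat it as a template rather than a drop-in script; see the Pyramid-Flow repo for the authoritative, up-to-date example.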
