---
license: apache-2.0
language:
- en
library_name: diffusers
---
<p align="center">
    <img src="https://huggingface.co/rhymes-ai/Allegro/resolve/main/Rgif.gif" width="500" height="400"/>
</p>
<p align="center">
    <a href="https://rhymes.ai/" target="_blank">Gallery</a> · <a href="https://github.com/rhymes-ai/Allegro" target="_blank">GitHub</a> · <a href="https://www.rhymes.ai/blog-details/" target="_blank">Blog</a> · <a href="https://arxiv.org/pdf/2410.05993" target="_blank">Paper</a> · <a href="https://discord" target="_blank">Discord</a>
</p>

# Gallery

<img src="https://huggingface.co/rhymes-ai/Allegro/resolve/main/gallery.gif" width="1000" height="800"/>

For more demos and corresponding prompts, see the [Allegro Gallery](TBD).

# Key Feature

Allegro generates high-quality, 6-second videos at 15 frames per second and 720p resolution from simple text prompts.

# Model info

<table>
  <tr>
    <th>Model</th>
    <td>Allegro</td>
  </tr>
  <tr>
    <th>Description</th>
    <td>Text-to-Video Diffusion Transformer</td>
  </tr>
  <tr>
    <th>Download</th>
    <td>&lt;HF link - TBD&gt;</td>
  </tr>
  <tr>
    <th rowspan="2">Parameters</th>
    <td>VAE: 175M</td>
  </tr>
  <tr>
    <td>DiT: 2.8B</td>
  </tr>
  <tr>
    <th rowspan="2">Inference Precision</th>
    <td>VAE: FP32/TF32/BF16/FP16 (best in FP32/TF32)</td>
  </tr>
  <tr>
    <td>DiT/T5: BF16/FP32/TF32</td>
  </tr>
  <tr>
    <th>Context Length</th>
    <td>79.2k</td>
  </tr>
  <tr>
    <th>Resolution</th>
    <td>720 x 1280</td>
  </tr>
  <tr>
    <th>Frames</th>
    <td>88</td>
  </tr>
  <tr>
    <th>Video Length</th>
    <td>6 seconds @ 15 fps</td>
  </tr>
  <tr>
    <th>Single GPU Memory Usage</th>
    <td>9.3 GB (BF16, with cpu_offload)</td>
  </tr>
</table>

# Quick start

You can quickly get started with Allegro using the Hugging Face Diffusers library. For more tutorials, see the Allegro GitHub (link-tbd).

Install the necessary requirements:

```bash
# imageio-ffmpeg is required to write MP4 files with imageio
pip install torch diffusers transformers imageio imageio-ffmpeg
```

Inference on a single GPU:

```python
import imageio
import torch
from diffusers import DiffusionPipeline

# Load the pipeline in BF16 and move it to the GPU
allegro_pipeline = DiffusionPipeline.from_pretrained(
    "rhymes-ai/Allegro", trust_remote_code=True, torch_dtype=torch.bfloat16
).to("cuda")

# Keep the VAE in full precision (best in FP32/TF32, see the table above)
allegro_pipeline.vae = allegro_pipeline.vae.to(torch.float32)

prompt = "a video of an astronaut riding a horse on mars"

# Template that wraps the user prompt with quality tags
positive_prompt = """
(masterpiece), (best quality), (ultra-detailed), (unwatermarked),
{}
emotional, harmonious, vignette, 4k epic detailed, shot on kodak, 35mm photo,
sharp focus, high budget, cinemascope, moody, epic, gorgeous
"""

negative_prompt = """
nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit,
fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,
signature, watermark, username, blurry.
"""

num_sampling_steps, guidance_scale, seed = 100, 7.5, 42

user_prompt = positive_prompt.format(prompt.lower().strip())

out_video = allegro_pipeline(
    user_prompt,
    negative_prompt=negative_prompt,
    num_frames=88,
    height=720,
    width=1280,
    num_inference_steps=num_sampling_steps,
    guidance_scale=guidance_scale,
    max_sequence_length=512,
    generator=torch.Generator(device="cuda:0").manual_seed(seed),
).video[0]

imageio.mimwrite("test_video.mp4", out_video, fps=15, quality=8)
```

For running on a GPU with less memory, see the note on CPU offloading at the end of this card.

# License

This repo is released under the Apache 2.0 License.
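
# Reducing GPU memory

The model info table quotes roughly 9.3 GB of GPU memory in BF16 when CPU offloading is enabled. The snippet below is a minimal sketch of one way to achieve that using the standard Diffusers offloading hook; it assumes the remote Allegro pipeline inherits `enable_sequential_cpu_offload()` from `DiffusionPipeline` and that `accelerate` is installed, neither of which is confirmed by this card — check the Allegro GitHub for the officially supported path.

```python
import torch
from diffusers import DiffusionPipeline

# Assumption: the remote Allegro pipeline supports the standard Diffusers
# offloading hooks (requires `pip install accelerate`).
allegro_pipeline = DiffusionPipeline.from_pretrained(
    "rhymes-ai/Allegro", trust_remote_code=True, torch_dtype=torch.bfloat16
)

# Keep the VAE in FP32, as recommended in the model info table.
allegro_pipeline.vae = allegro_pipeline.vae.to(torch.float32)

# Sequential CPU offload moves each submodule to the GPU only while it runs,
# trading inference speed for a much smaller peak memory footprint.
# Do NOT call .to("cuda") when using this; the pipeline manages device
# placement itself.
allegro_pipeline.enable_sequential_cpu_offload()

# Same call signature as the single-GPU example above.
out_video = allegro_pipeline(
    "a video of an astronaut riding a horse on mars",
    num_frames=88,
    height=720,
    width=1280,
    num_inference_steps=100,
    guidance_scale=7.5,
).video[0]
```

Sequential offloading is noticeably slower than keeping the whole pipeline on the GPU, so prefer the plain `.to("cuda")` path whenever your card has enough memory.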