LayerFlow: A Unified Model for Layer-aware Video Generation
Abstract
LayerFlow is a unified framework for generating layer-aware videos using a text-to-video diffusion transformer and layer embeddings, supporting various video generation tasks with a multi-stage training strategy.
We present LayerFlow, a unified solution for layer-aware video generation. Given per-layer prompts, LayerFlow generates videos for the transparent foreground, the clean background, and the blended scene. It also supports versatile variants such as decomposing a blended video, or generating the background for a given foreground and vice versa. Starting from a text-to-video diffusion transformer, we organize the videos for the different layers as sub-clips and leverage layer embeddings to distinguish each clip and its corresponding layer-wise prompt. In this way, we seamlessly support all of the aforementioned variants within one unified framework. Because high-quality layer-wise training videos are scarce, we design a multi-stage training strategy that accommodates static images with high-quality layer annotations. Specifically, we first train the model on low-quality video data. Then, we tune a motion LoRA to make the model compatible with static frames. Afterward, we train a content LoRA on a mixture of high-quality layered images and copy-pasted video data. During inference, we remove the motion LoRA, thus generating smooth videos with the desired layers.
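To make the layer-embedding idea above concrete, here is a minimal PyTorch sketch of one plausible way to tag per-layer sub-clips and join them for the diffusion transformer. The module name, the 1024-dimensional token space, and the token-axis concatenation are assumptions for illustration, not the released LayerFlow implementation.

```python
import torch
import torch.nn as nn


class LayerEmbedding(nn.Module):
    """Hypothetical sketch: learned embeddings that tag each sub-clip
    (foreground / background / blended) before the diffusion transformer."""

    def __init__(self, num_layers: int = 3, dim: int = 1024):
        super().__init__()
        self.embed = nn.Embedding(num_layers, dim)

    def forward(self, subclip_tokens) -> torch.Tensor:
        # subclip_tokens: one (batch, tokens, dim) tensor per layer.
        tagged = []
        for layer_id, tokens in enumerate(subclip_tokens):
            idx = torch.tensor(layer_id, device=tokens.device)
            tag = self.embed(idx)            # (dim,)
            tagged.append(tokens + tag)      # broadcast over batch and tokens
        # Concatenate the sub-clips along the token axis so the transformer
        # attends jointly across all layers of the scene.
        return torch.cat(tagged, dim=1)


# Usage: three sub-clips (foreground, background, blended), 256 tokens each.
fg, bg, blend = (torch.randn(2, 256, 1024) for _ in range(3))
joint_tokens = LayerEmbedding()([fg, bg, blend])  # -> (2, 768, 1024)
```

Tagging each sub-clip with a learned layer embedding lets a single set of transformer weights serve generation, decomposition, and conditional variants, since the model can tell which tokens belong to which layer and to which layer-wise prompt.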
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MAGREF: Masked Guidance for Any-Reference Video Generation (2025)
- HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation (2025)
- VideoPanda: Video Panoramic Diffusion with Multi-view Attention (2025)
- Many-for-Many: Unify the Training of Multiple Video and Image Generation and Manipulation Tasks (2025)
- OmniVDiff: Omni Controllable Video Diffusion for Generation and Understanding (2025)
- Subject-driven Video Generation via Disentangled Identity and Motion (2025)
- ShotAdapter: Text-to-Multi-Shot Video Generation with Diffusion Models (2025)