arxiv:2506.19852

Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation

Published on Jun 24
· Submitted by Lmxyy on Jul 2

Abstract

Radial Attention, a scalable sparse attention mechanism, improves efficiency and preserves video quality in diffusion models by leveraging spatiotemporal energy decay.

AI-generated summary

Recent advances in diffusion models have enabled high-quality video generation, but the additional temporal dimension significantly increases computational costs, making training and inference on long videos prohibitively expensive. In this paper, we identify a phenomenon we term Spatiotemporal Energy Decay in video diffusion models: post-softmax attention scores diminish as the spatial and temporal distance between tokens increases, akin to the physical decay of signals or waves over space and time in nature. Motivated by this, we propose Radial Attention, a scalable sparse attention mechanism with O(n log n) complexity that translates energy decay into exponentially decaying compute density, which is significantly more efficient than standard O(n^2) dense attention and more expressive than linear attention. Specifically, Radial Attention employs a simple, static attention mask where each token attends to spatially nearby tokens, with the attention window size shrinking with temporal distance. Moreover, it allows pre-trained video diffusion models to extend their generation length with efficient LoRA-based fine-tuning. Extensive experiments show that Radial Attention maintains video quality across Wan2.1-14B, HunyuanVideo, and Mochi 1, achieving up to a 1.9× speedup over the original dense attention. With minimal tuning, it enables video generation up to 4× longer while reducing training costs by up to 4.4× compared to direct fine-tuning and accelerating inference by up to 3.7× compared to dense attention inference.
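
To make the masking idea concrete, here is a minimal sketch (not the authors' released implementation) of a static, radial-style attention mask in PyTorch: each query token may attend only to a spatial window of tokens in every frame, and that window halves as the temporal distance between frames grows. The frame count, tokens per frame, and base window size below are illustrative assumptions.

```python
# Minimal sketch of a radial-style static attention mask (illustrative only).
# True = attention allowed. The spatial window halves with temporal distance,
# so the number of attended pairs stays far below dense n^2 attention,
# in the spirit of the paper's exponentially decaying compute density.
import torch

def radial_mask(num_frames: int, tokens_per_frame: int, base_window: int) -> torch.Tensor:
    """Return a boolean (N, N) mask with N = num_frames * tokens_per_frame."""
    n = num_frames * tokens_per_frame
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(num_frames):          # query frame
        for j in range(num_frames):      # key frame
            dt = abs(i - j)
            window = base_window >> dt   # spatial window shrinks exponentially with |i - j|
            if window == 0:
                continue
            for p in range(tokens_per_frame):
                q_idx = i * tokens_per_frame + p
                lo = max(p - window, 0)
                hi = min(p + window + 1, tokens_per_frame)
                mask[q_idx, j * tokens_per_frame + lo : j * tokens_per_frame + hi] = True
    return mask

mask = radial_mask(num_frames=8, tokens_per_frame=64, base_window=16)
print(mask.shape, mask.float().mean())  # fraction of attended pairs is well below 1
```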

Community

Paper author · Paper submitter

We introduce Radial Attention, a sparse attention mechanism with O(n log n) computational complexity for long video generation.

🔍 Key Features:
✅ Plug-and-play: works with pre-trained models like Wan, HunyuanVideo, Mochi
✅ Speeds up both training & inference by 2–4×, without quality loss

All you need is a pre-defined static attention mask!
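
As a hedged usage sketch, a precomputed boolean mask like the one above could be dropped into a standard attention call via PyTorch's scaled_dot_product_attention. The shapes and the radial_mask helper are illustrative assumptions, not the released radial-attention API, and a dense boolean mask only reproduces the sparsity pattern; real speedups require a sparse attention kernel.

```python
# Usage sketch: apply a static radial-style mask in a standard attention call.
import torch
import torch.nn.functional as F

B, H, N, D = 1, 8, 8 * 64, 64            # batch, heads, tokens (frames * tokens/frame), head dim
q = torch.randn(B, H, N, D)
k = torch.randn(B, H, N, D)
v = torch.randn(B, H, N, D)

mask = radial_mask(num_frames=8, tokens_per_frame=64, base_window=16)  # (N, N) bool
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)          # mask broadcasts over B, H
print(out.shape)  # torch.Size([1, 8, 512, 64])
```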
ComfyUI integration is in progress and will be released in ComfyUI-nunchaku!

Paper: https://arxiv.org/abs/2506.19852
Code: https://github.com/mit-han-lab/radial-attention
Website: https://hanlab.mit.edu/projects/radial-attention

Excellent work!

MIT HAN Lab delivers once again


Models citing this paper 0

No model linking this paper

Datasets citing this paper 0

No dataset linking this paper

Spaces citing this paper 0

No Space linking this paper

Collections including this paper 6