UNCAGE: Contrastive Attention Guidance for Masked Generative Transformers in Text-to-Image Generation
Abstract
UNCAGE, a training-free method using contrastive attention guidance, enhances compositional fidelity in text-to-image generation by prioritizing the unmasking of object-representing tokens.
Text-to-image (T2I) generation has been actively studied using Diffusion Models and Autoregressive Models. Recently, Masked Generative Transformers have gained attention as an alternative to Autoregressive Models to overcome the inherent limitations of causal attention and autoregressive decoding through bidirectional attention and parallel decoding, enabling efficient and high-quality image generation. However, compositional T2I generation remains challenging, as even state-of-the-art Diffusion Models often fail to accurately bind attributes and achieve proper text-image alignment. While Diffusion Models have been extensively studied for this issue, Masked Generative Transformers exhibit similar limitations but have not been explored in this context. To address this, we propose Unmasking with Contrastive Attention Guidance (UNCAGE), a novel training-free method that improves compositional fidelity by leveraging attention maps to prioritize the unmasking of tokens that clearly represent individual objects. UNCAGE consistently improves performance in both quantitative and qualitative evaluations across multiple benchmarks and metrics, with negligible inference overhead. Our code is available at https://github.com/furiosa-ai/uncage.
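The core idea of the abstract — using attention maps to decide which masked tokens to unmask first, favoring tokens that attend clearly to a single object's prompt tokens — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the contrastive score, and the way confidence is combined are all illustrative assumptions, written with NumPy:

```python
import numpy as np

def contrastive_unmask_order(attn, obj_groups, confidence, k):
    """Illustrative sketch (not the official UNCAGE code): pick which
    masked image tokens to unmask next.

    attn:       (num_image_tokens, num_text_tokens) attention map
    obj_groups: list of index arrays, one per object's prompt tokens
    confidence: (num_image_tokens,) model confidence per image token
    k:          number of tokens to unmask this step
    """
    # Attention mass each image token assigns to each object's prompt tokens.
    per_obj = np.stack([attn[:, g].sum(axis=1) for g in obj_groups], axis=1)
    # Contrastive score: attention to the strongest object minus the average
    # attention to the others, so tokens that clearly represent a single
    # object score high, while ambiguous tokens score low.
    top = per_obj.max(axis=1)
    rest = per_obj.sum(axis=1) - top
    contrast = top - rest / max(len(obj_groups) - 1, 1)
    # Combine with model confidence and unmask the top-k tokens first.
    score = confidence + contrast
    return np.argsort(-score)[:k]
```

Under this toy scoring, image tokens whose attention is concentrated on one object are committed early, while tokens spread across several objects are deferred until the early commitments constrain them, which is the intuition the abstract describes.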
Community
UNCAGE is a novel unmasking method that improves compositional text-to-image generation in Masked Generative Transformers by using contrastive attention guidance to prioritize unmasking tokens that distinctly represent individual objects.
Similar papers recommended by the Semantic Scholar API:
- OptiPrune: Boosting Prompt-Image Consistency with Attention-Guided Noise and Dynamic Token Selection (2025)
- Resurrect Mask AutoRegressive Modeling for Efficient and Scalable Image Generation (2025)
- DC-AR: Efficient Masked Autoregressive Image Generation with Deep Compression Hybrid Tokenizer (2025)
- Make It Efficient: Dynamic Sparse Attention for Autoregressive Image Generation (2025)
- MADI: Masking-Augmented Diffusion with Inference-Time Scaling for Visual Editing (2025)
- PromptSafe: Gated Prompt Tuning for Safe Text-to-Image Generation (2025)
- Local Prompt Adaptation for Style-Consistent Multi-Object Generation in Diffusion Models (2025)