XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation
Abstract
XVerse enhances text-to-image generation by enabling precise and independent control over multiple subjects using token-specific text-stream modulation, improving image coherence and fidelity.
Achieving fine-grained control over subject identity and semantic attributes (pose, style, lighting) in text-to-image generation, particularly for multiple subjects, often undermines the editability and coherence of Diffusion Transformers (DiTs). Many approaches introduce artifacts or suffer from attribute entanglement. To overcome these challenges, we propose XVerse, a novel multi-subject controlled generation model. By transforming reference images into offsets for token-specific text-stream modulation, XVerse allows precise and independent control over specific subjects without disrupting image latents or features. Consequently, XVerse offers high-fidelity, editable multi-subject image synthesis with robust control over individual subject characteristics and semantic attributes. This advancement significantly improves personalized and complex scene generation capabilities.
Community
XVerse introduces a novel approach to multi-subject image synthesis, offering precise and independent control over individual subjects without disrupting the overall image latents or features. We achieve this by transforming reference images into offsets for token-specific text-stream modulation.
This innovation enables high-fidelity, editable image generation where you can robustly control both individual subject characteristics (identity) and their semantic attributes. XVerse significantly enhances capabilities for personalized and complex scene generation.
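To make the mechanism concrete, here is a minimal, hypothetical sketch (not the official XVerse code) of token-specific text-stream modulation: pooled reference-image features are projected into shift/scale offsets that are added only to the modulation of the text tokens describing that subject, leaving the image stream untouched. All module and variable names (`TokenModulationOffset`, `subject_token_mask`, etc.) are illustrative assumptions.

```python
# Hypothetical sketch of token-specific text-stream modulation offsets.
import torch
import torch.nn as nn

class TokenModulationOffset(nn.Module):
    """Maps reference-image features to shift/scale offsets for selected text tokens."""
    def __init__(self, ref_dim: int, text_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(ref_dim, text_dim),
            nn.SiLU(),
            nn.Linear(text_dim, 2 * text_dim),  # produces (delta_shift, delta_scale)
        )

    def forward(self, text_mod_shift, text_mod_scale, ref_feats, subject_token_mask):
        # text_mod_shift / text_mod_scale: (B, T, D) base modulation of the text stream
        # ref_feats: (B, D_ref) pooled features of one subject's reference image
        # subject_token_mask: (B, T) bool, True for tokens describing that subject
        delta = self.proj(ref_feats)                      # (B, 2*D)
        d_shift, d_scale = delta.chunk(2, dim=-1)         # each (B, D)
        mask = subject_token_mask.unsqueeze(-1).float()   # (B, T, 1)
        # Offsets apply only to the subject's own tokens, so other subjects and
        # the rest of the prompt keep their original modulation.
        new_shift = text_mod_shift + mask * d_shift.unsqueeze(1)
        new_scale = text_mod_scale + mask * d_scale.unsqueeze(1)
        return new_shift, new_scale


# Toy usage: each subject would get its own ref_feats and token mask.
B, T, D, D_ref = 1, 16, 64, 128
mod = TokenModulationOffset(D_ref, D)
shift, scale = torch.zeros(B, T, D), torch.ones(B, T, D)
ref = torch.randn(B, D_ref)
mask = torch.zeros(B, T, dtype=torch.bool)
mask[:, 3:6] = True  # tokens for "subject A"
new_shift, new_scale = mod(shift, scale, ref, mask)
```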
Project page: https://bytedance.github.io/XVerse/
Github: https://github.com/bytedance/XVerse
HuggingFace: https://huggingface.co/ByteDance/XVerse
would be awesome to have the demo on Spaces too 🔥