arxiv:2507.02321

Heeding the Inner Voice: Aligning ControlNet Training via Intermediate Features Feedback

Published on Jul 3 · Submitted by ai-alanov on Jul 4
#3 Paper of the day

Abstract

AI-generated summary: InnerControl enforces spatial consistency across all diffusion steps by training lightweight convolutional probes to improve control fidelity and generation quality in text-to-image diffusion models.

Despite significant progress in text-to-image diffusion models, achieving precise spatial control over generated outputs remains challenging. ControlNet addresses this by introducing an auxiliary conditioning module, while ControlNet++ further refines alignment through a cycle consistency loss applied only to the final denoising steps. However, this approach neglects intermediate generation stages, limiting its effectiveness. We propose InnerControl, a training strategy that enforces spatial consistency across all diffusion steps. Our method trains lightweight convolutional probes to reconstruct input control signals (e.g., edges, depth) from intermediate UNet features at every denoising step. These probes efficiently extract signals even from highly noisy latents, enabling pseudo ground truth controls for training. By minimizing the discrepancy between predicted and target conditions throughout the entire diffusion process, our alignment loss improves both control fidelity and generation quality. Combined with established techniques like ControlNet++, InnerControl achieves state-of-the-art performance across diverse conditioning methods (e.g., edges, depth).
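The abstract's core mechanism lends itself to a short illustration: a lightweight convolutional probe maps intermediate UNet features back to the control map, and an alignment loss penalizes the discrepancy at each denoising step. The PyTorch sketch below is only an approximation under illustrative assumptions; the `ConvProbe` module, its channel widths, and the MSE discrepancy are placeholders, not the paper's exact implementation.

```python
# Hedged sketch of the probe-based alignment idea described above (PyTorch).
# Names, widths, and the MSE discrepancy are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvProbe(nn.Module):
    """Lightweight convolutional head: intermediate UNet features -> control map."""

    def __init__(self, in_channels: int, out_channels: int = 1, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, features: torch.Tensor, target_size) -> torch.Tensor:
        pred = self.net(features)
        # Upsample the prediction to the resolution of the target control map.
        return F.interpolate(pred, size=target_size, mode="bilinear", align_corners=False)


def alignment_loss(probe: ConvProbe, unet_features: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
    """Discrepancy between the probe's reconstruction and the target control signal.

    `unet_features` are intermediate activations at an arbitrary (possibly very
    noisy) denoising step; `control` is the conditioning map (e.g. edges, depth).
    """
    pred = probe(unet_features, target_size=control.shape[-2:])
    return F.mse_loss(pred, control)


if __name__ == "__main__":
    # Toy shapes only: 320 channels is a typical Stable Diffusion UNet width.
    probe = ConvProbe(in_channels=320, out_channels=1)
    feats = torch.randn(2, 320, 32, 32)      # stand-in for intermediate UNet features
    depth = torch.rand(2, 1, 256, 256)       # stand-in for a target depth map
    loss = alignment_loss(probe, feats, depth)
    loss.backward()
    print(f"alignment loss: {loss.item():.4f}")
```

In training, such an alignment term would be added to the usual diffusion objective at every sampled timestep, rather than only at the final denoising steps as in ControlNet++'s cycle consistency loss.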

Community

Paper submitter

We propose InnerControl, which improves spatial control in text-to-image diffusion models by enforcing consistency across all denoising steps via lightweight convolutional probes that reconstruct control signals from intermediate features. Combined with prior methods such as ControlNet++, it outperforms ControlNet and ControlNet++ alone. Code is available on GitHub.
