arxiv:2504.16064

Boosting Generative Image Modeling via Joint Image-Feature Synthesis

Published on Apr 22 · Submitted by Sta8is on Apr 25

Abstract

Latent diffusion models (LDMs) dominate high-quality image generation, yet integrating representation learning with generative modeling remains a challenge. We introduce a novel generative image modeling framework that seamlessly bridges this gap by leveraging a diffusion model to jointly model low-level image latents (from a variational autoencoder) and high-level semantic features (from a pretrained self-supervised encoder like DINO). Our latent-semantic diffusion approach learns to generate coherent image-feature pairs from pure noise, significantly enhancing both generative quality and training efficiency, all while requiring only minimal modifications to standard Diffusion Transformer architectures. By eliminating the need for complex distillation objectives, our unified design simplifies training and unlocks a powerful new inference strategy: Representation Guidance, which leverages learned semantics to steer and refine image generation. Evaluated in both conditional and unconditional settings, our method delivers substantial improvements in image quality and training convergence speed, establishing a new direction for representation-aware generative modeling.

Community

Comment from the paper author and submitter:
  1. ReDi (Representation Diffusion) is a new generative approach that leverages a diffusion model to jointly capture:
    – Low-level image details (via VAE latents)
    – High-level semantic features (via DINOv2)

  2. The result?
    🔗 A powerful new method for generative image modeling that bridges generation and representation learning.
    ⚡️ Massive gains in generation performance and training efficiency, plus a new paradigm for representation-aware generative modeling.

  3. ReDi builds on the insight that some latent representations are inherently easier to model, enabling a unified dual-space diffusion approach that generates coherent image–feature pairs from pure noise.

  4. Integrating ReDi into DiT/SiT-style architectures is seamless:
    🔹Apply noise to both image latents and semantic features
    🔹Fuse them into one token sequence
    🔹Denoise both with standard DiT/SiT
    That’s it.
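A minimal PyTorch sketch of that recipe (a sketch under assumed names and a simple cosine schedule, not the authors' code; `dit` stands for any standard DiT/SiT backbone):

```python
import torch
import torch.nn.functional as F

def redi_training_step(dit, x_tokens, f_tokens, t):
    """One joint denoising step in the spirit of the recipe above (sketch only).
    x_tokens: VAE image-latent tokens (B, N, D); f_tokens: DINOv2 feature tokens (B, M, D);
    t: diffusion timesteps in [0, 1], shape (B,)."""
    noise_x = torch.randn_like(x_tokens)
    noise_f = torch.randn_like(f_tokens)

    # Simple cosine schedule (an assumption, not necessarily the paper's schedule).
    alpha = torch.cos(0.5 * torch.pi * t).pow(2).view(-1, 1, 1)

    # Apply noise to both the image latents and the semantic features.
    noisy_x = alpha.sqrt() * x_tokens + (1 - alpha).sqrt() * noise_x
    noisy_f = alpha.sqrt() * f_tokens + (1 - alpha).sqrt() * noise_f

    # Fuse into one token sequence (separate-token style; see point 5) and denoise jointly.
    tokens = torch.cat([noisy_x, noisy_f], dim=1)
    pred = dit(tokens, t)                      # predicts the noise for every token
    target = torch.cat([noise_x, noise_f], dim=1)
    return F.mse_loss(pred, target)
```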

  5. We explore two ways to fuse tokens for image latents & features:
    🔹Merged Tokens (MR): Efficient, keeps token count constant
    🔹Separate Tokens (SP): More expressive, ~2x compute
    Both boost performance, but MR hits the sweet spot for speed vs. quality.
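To make the two fusion options concrete, here is a hedged sketch, assuming the latent and feature tokens have already been projected to a common width `d` and aligned to the same grid (the projection layer and names are illustrative, not from the repo):

```python
import torch
import torch.nn as nn

d = 768  # assumed shared token width after projection

# Merged Tokens (MR): concatenate channel-wise per position and project back to d,
# so the sequence length (and transformer cost) stays constant.
merge_proj = nn.Linear(2 * d, d)

def fuse_merged(x_tokens, f_tokens):             # both (B, N, d)
    return merge_proj(torch.cat([x_tokens, f_tokens], dim=-1))   # (B, N, d)

# Separate Tokens (SP): concatenate along the sequence dimension instead,
# doubling the token count and roughly doubling compute.
def fuse_separate(x_tokens, f_tokens):           # both (B, N, d)
    return torch.cat([x_tokens, f_tokens], dim=1)                # (B, 2N, d)
```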

  6. ReDi requires no extra distillation losses, just pure diffusion, significantly simplifying training. Plus, it unlocks Representation Guidance (RG), a new inference strategy that uses learned semantics to steer and refine image generation. 🎯
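The exact Representation Guidance update rule is defined in the paper; purely as an illustration of the general guidance pattern (a classifier-free-guidance-style contrast, assuming the separate-token layout and hypothetical names, not the authors' formula):

```python
import torch

def rg_step(dit, x_tokens, f_tokens, t, w=1.5):
    # Prediction for the image tokens when denoised jointly with the semantic features...
    joint = dit(torch.cat([x_tokens, f_tokens], dim=1), t)
    eps_with = joint[:, : x_tokens.shape[1]]
    # ...contrasted with a prediction from the image tokens alone. This "unguided"
    # branch and the guidance scale w are assumptions used only to show the pattern.
    eps_without = dit(x_tokens, t)
    return eps_without + w * (eps_with - eps_without)
```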

  7. Training speed? Massive improvements for both DiT and SiT:
    ~23x faster convergence than baseline DiT/SiT.
    ~6x faster than REPA. 🚀

  8. ReDi delivers state-of-the-art generation performance across the board. 🔥

  9. Unconditional generation gets a huge upgrade too. ReDi + Representation Guidance (RG) nearly closes the gap with conditional models. E.g., unconditional DiT-XL/2 with ReDi+RG hits FID 22.6, close to class-conditioned DiT-XL’s FID 19.5!

  10. We apply PCA to the DINOv2 features so they retain their expressivity without dominating model capacity. Just a few principal components suffice to significantly boost generative performance.
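A minimal sketch of that preprocessing step, fitting PCA offline on DINOv2 features (the component count and names are illustrative, not the paper's exact setting):

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_feature_pca(dino_feats: np.ndarray, k: int = 8) -> PCA:
    """Fit PCA on DINOv2 features gathered over a training subset.
    dino_feats: (num_samples, feat_dim); k is an illustrative component count."""
    pca = PCA(n_components=k)
    pca.fit(dino_feats)
    return pca

def reduce_features(pca: PCA, dino_feats: np.ndarray) -> np.ndarray:
    # The reduced features are what the diffusion model generates jointly with the VAE latents.
    return pca.transform(dino_feats)  # (num_samples, k)
```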

Paper: https://arxiv.org/abs/2504.16064
Code: https://github.com/zelaki/ReDi

