Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents
Abstract
Bifrost-1 integrates pretrained multimodal LLMs and diffusion models using patch-level CLIP embeddings to enable efficient high-fidelity image generation with strong multimodal reasoning.
There is growing interest in integrating high-fidelity visual synthesis into large language models (LLMs) without compromising their strong reasoning capabilities. Existing methods that directly train LLMs or bridge LLMs and diffusion models usually suffer from costly training because the backbone LLMs have not seen image representations during pretraining. We present Bifrost-1, a unified framework that bridges pretrained multimodal LLMs (MLLMs) and diffusion models using patch-level CLIP image embeddings as latent variables, which are natively aligned with the MLLM's CLIP visual encoder. These patch-level image embeddings are integrated into the diffusion model through a lightweight adaptation of its ControlNet. To retain the original multimodal reasoning capabilities of the MLLM, we equip it with a visual generation branch, initialized from the original MLLM parameters, that predicts the patch-level image embeddings. By seamlessly integrating pretrained MLLMs and diffusion models with patch-level CLIP latents, our framework enables high-fidelity, controllable image generation with significantly improved training efficiency. Our experiments demonstrate that Bifrost-1 achieves visual fidelity and multimodal understanding comparable to or better than previous methods, with substantially lower training compute. We also provide comprehensive ablation studies showing the effectiveness of our design choices.
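To make the pipeline described above concrete, here is a minimal, hypothetical PyTorch sketch of the two bridging components: an MLLM-side visual generation branch that predicts patch-level CLIP latents, and a lightweight ControlNet-style adapter that maps those latents into features for a diffusion backbone. The module names, layer counts, and dimensions (`VisualGenerationBranch`, `ControlNetAdapter`, `clip_dim=1024`, `num_patches=256`) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of the Bifrost-1 bridging idea (assumed shapes and modules,
# not the official code): an MLLM-side branch predicts patch-level CLIP latents,
# and a lightweight adapter turns them into conditioning features for a diffusion model.
import torch
import torch.nn as nn


class VisualGenerationBranch(nn.Module):
    """Stand-in for the MLLM's visual generation branch.

    In Bifrost-1 this branch is initialized from the original MLLM parameters;
    here it is a small transformer encoder purely for illustration.
    """

    def __init__(self, hidden_dim=1024, clip_dim=1024):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.to_clip = nn.Linear(hidden_dim, clip_dim)

    def forward(self, mllm_hidden_states):
        # mllm_hidden_states: (B, num_patches, hidden_dim), the MLLM states at the
        # image-token positions. Output: predicted patch-level CLIP embeddings.
        h = self.blocks(mllm_hidden_states)
        return self.to_clip(h)  # (B, num_patches, clip_dim)


class ControlNetAdapter(nn.Module):
    """Lightweight ControlNet-style adapter.

    Projects the predicted patch-level CLIP latents into residual features that
    a diffusion backbone (kept abstract here) can consume as conditioning.
    """

    def __init__(self, clip_dim=1024, unet_dim=1280):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(clip_dim, unet_dim),
            nn.SiLU(),
            nn.Linear(unet_dim, unet_dim),
        )

    def forward(self, clip_latents):
        return self.proj(clip_latents)  # (B, num_patches, unet_dim)


if __name__ == "__main__":
    B, num_patches, hidden_dim = 2, 256, 1024
    branch = VisualGenerationBranch(hidden_dim=hidden_dim)
    adapter = ControlNetAdapter()

    # Hidden states at the image positions, as a frozen MLLM would produce them
    # (random tensors stand in for the real model here).
    mllm_states = torch.randn(B, num_patches, hidden_dim)

    patch_clip_latents = branch(mllm_states)        # predicted CLIP patch embeddings
    control_features = adapter(patch_clip_latents)  # features injected into the diffusion model
    print(patch_clip_latents.shape, control_features.shape)
```

The sketch reflects the key interface choice in the abstract: the only signal passed between the pretrained MLLM and the pretrained diffusion model is the set of patch-level CLIP latents, so both backbones can remain largely unchanged while only the lightweight bridging modules are adapted.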
Community
The following related papers were recommended by the Semantic Scholar API:
- Vision as a Dialect: Unifying Visual Understanding and Generation via Text-Aligned Representations (2025)
- UniCode2: Cascaded Large-scale Codebooks for Unified Multimodal Understanding and Generation (2025)
- MENTOR: Efficient Multimodal-Conditioned Tuning for Autoregressive Vision Generation Models (2025)
- MoCa: Modality-aware Continual Pre-training Makes Better Bidirectional Multimodal Embeddings (2025)
- MotionGPT3: Human Motion as a Second Modality (2025)
- Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models (2025)
- Show-o2: Improved Native Unified Multimodal Models (2025)