DreamCube: 3D Panorama Generation via Multi-plane Synchronization
Abstract
TL;DR: Multi-plane synchronization extends 2D foundation models to the omnidirectional domain; building on it, DreamCube generates 3D panoramas with diverse appearance and accurate geometry.
3D panorama synthesis is a promising yet challenging task that demands high-quality and diverse visual appearance and geometry of the generated omnidirectional content. Existing methods leverage rich image priors from pre-trained 2D foundation models to circumvent the scarcity of 3D panoramic data, but the incompatibility between 3D panoramas and 2D single views limits their effectiveness. In this work, we demonstrate that by applying multi-plane synchronization to the operators from 2D foundation models, their capabilities can be seamlessly extended to the omnidirectional domain. Based on this design, we further introduce DreamCube, a multi-plane RGB-D diffusion model for 3D panorama generation, which maximizes the reuse of 2D foundation model priors to achieve diverse appearances and accurate geometry while maintaining multi-view consistency. Extensive experiments demonstrate the effectiveness of our approach in panoramic image generation, panoramic depth estimation, and 3D scene generation.
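The core idea of multi-plane synchronization is to apply a 2D operator jointly across the planes of an omnidirectional representation (e.g., the six faces of a cubemap) so that their outputs stay mutually consistent. As a minimal illustrative sketch (not the paper's implementation), the example below shows a normalization operator whose statistics are shared across all faces instead of being computed per face; the function name and the cubemap layout are assumptions for illustration only:

```python
import numpy as np

def synced_norm(faces, eps=1e-5):
    """Hypothetical 'multi-plane synchronized' normalization.

    faces: array of shape (6, H, W, C) -- one plane per cubemap face.
    Statistics (mean/var) are computed jointly over all six faces and
    all spatial positions, per channel, so every face is normalized
    with the same parameters and remains consistent with its neighbors.
    """
    mean = faces.mean(axis=(0, 1, 2), keepdims=True)  # joint stats across faces
    var = faces.var(axis=(0, 1, 2), keepdims=True)
    return (faces - mean) / np.sqrt(var + eps)

# Toy usage: six 8x8 RGB feature planes normalized with shared statistics.
faces = np.random.rand(6, 8, 8, 3).astype(np.float32)
out = synced_norm(faces)
```

Normalizing each face independently would instead let per-face statistics drift apart, producing visible seams at face boundaries; sharing statistics is one simple way a 2D operator can be made "panorama-aware" without retraining.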