Learning from Videos for 3D World: Enhancing MLLMs with 3D Vision Geometry Priors
Abstract
A novel Video-3D Geometry Large Language Model (VG LLM) extracts 3D geometry priors directly from video sequences to enhance 3D scene understanding without additional 3D data, achieving competitive results on 3D scene understanding and spatial reasoning tasks.
Previous research has investigated the application of Multimodal Large Language Models (MLLMs) to 3D scene understanding by interpreting scenes as videos. These approaches generally depend on comprehensive 3D inputs, such as point clouds or reconstructed Bird's-Eye View (BEV) maps. In this work, we advance this line of research by enabling MLLMs to understand and reason about 3D space directly from video, without any additional 3D input. We propose a novel and efficient method, the Video-3D Geometry Large Language Model (VG LLM). Our approach employs a 3D visual geometry encoder that extracts 3D prior information from video sequences; this information is integrated with the visual tokens and fed into the MLLM. Extensive experiments show that our method achieves substantial improvements on a variety of 3D scene understanding and spatial reasoning tasks, all learned directly from video. Impressively, our 4B model, which does not rely on explicit 3D data inputs, achieves results competitive with existing state-of-the-art methods and even surpasses Gemini-1.5-Pro on the VSI-Bench evaluation.
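To make the described architecture concrete, below is a minimal, illustrative sketch of how per-frame 3D geometry priors might be fused with ordinary visual tokens before they reach the language model. All module names, dimensions, and the simple additive fusion are assumptions for illustration only, not the paper's implementation.

```python
# Illustrative sketch (not the authors' code): fusing features from a
# 3D visual geometry encoder with standard visual tokens before the LLM.
# Module names, dimensions, and additive fusion are assumptions.
import torch
import torch.nn as nn


class GeometryAugmentedVisualEncoder(nn.Module):
    def __init__(self, vis_dim=1024, geo_dim=768, llm_dim=2048):
        super().__init__()
        # Stand-ins for a 2D visual encoder (e.g., a ViT patch embedder) and a
        # 3D visual geometry encoder that infers geometry priors from raw frames.
        self.visual_encoder = nn.Linear(3 * 16 * 16, vis_dim)    # appearance stub
        self.geometry_encoder = nn.Linear(3 * 16 * 16, geo_dim)  # geometry-prior stub
        # Project both token streams into the LLM embedding space.
        self.vis_proj = nn.Linear(vis_dim, llm_dim)
        self.geo_proj = nn.Linear(geo_dim, llm_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, num_patches, 3*16*16) flattened patches
        b, t, p, d = frames.shape
        x = frames.view(b, t * p, d)
        vis_tokens = self.vis_proj(self.visual_encoder(x))    # appearance tokens
        geo_tokens = self.geo_proj(self.geometry_encoder(x))  # 3D-prior tokens
        # One simple way to integrate the two streams: element-wise addition,
        # yielding geometry-aware visual tokens that are fed to the MLLM.
        return vis_tokens + geo_tokens


if __name__ == "__main__":
    fake_video = torch.randn(1, 8, 196, 3 * 16 * 16)  # 8 frames of 14x14 patches
    tokens = GeometryAugmentedVisualEncoder()(fake_video)
    print(tokens.shape)  # torch.Size([1, 1568, 2048]) -> passed to the LLM
```

In the actual VG LLM the geometry encoder and fusion details may differ; the sketch only illustrates the overall data flow of injecting 3D priors into the visual token stream.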
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence (2025)
- Extending Large Vision-Language Model for Diverse Interactive Tasks in Autonomous Driving (2025)
- 3D CoCa: Contrastive Learners are 3D Captioners (2025)
- Zero-Shot 3D Visual Grounding from Vision-Language Models (2025)
- Detect Anything 3D in the Wild (2025)
- SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning (2025)
- VLM-3R: Vision-Language Models Augmented with Instruction-Aligned 3D Reconstruction (2025)