arxiv:2506.04308

RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics

Published on Jun 4 · Submitted by Zhoues on Jun 6

Abstract

AI-generated summary: RoboRefer, a 3D-aware vision-language model, enhances spatial understanding and multi-step reasoning in embodied robots through supervised and reinforcement fine-tuning, using the RefSpatial dataset and RefSpatial-Bench benchmark.

Spatial referring is a fundamental capability of embodied robots to interact with the 3D physical world. However, even with powerful pretrained vision-language models (VLMs), recent approaches still fall short of accurately understanding complex 3D scenes and dynamically reasoning about instruction-indicated locations for interaction. To this end, we propose RoboRefer, a 3D-aware VLM that first achieves precise spatial understanding by integrating a disentangled but dedicated depth encoder via supervised fine-tuning (SFT). Moreover, RoboRefer advances generalized multi-step spatial reasoning via reinforcement fine-tuning (RFT), with metric-sensitive process reward functions tailored for spatial referring tasks. To support SFT and RFT training, we introduce RefSpatial, a large-scale dataset of 20M QA pairs (2x prior), covering 31 spatial relations (vs. 15 prior) and supporting complex reasoning processes (up to 5 steps). In addition, we introduce RefSpatial-Bench, a challenging benchmark that fills the gap in evaluating spatial referring with multi-step reasoning. Experiments show that SFT-trained RoboRefer achieves state-of-the-art spatial understanding, with an average success rate of 89.6%. RFT-trained RoboRefer further outperforms all other baselines by a large margin, even surpassing Gemini-2.5-Pro by 17.4% in average accuracy on RefSpatial-Bench. Notably, RoboRefer can be integrated with various control policies to execute long-horizon, dynamic tasks across diverse robots (e.g., UR5, G1 humanoid) in cluttered real-world scenes.
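The "metric-sensitive process reward functions" mentioned above are not detailed on this page. Purely as a hypothetical illustration (the function name, tolerance, and decay shape are assumptions, not the paper's formulation), a distance-based reward over predicted 2D referring points could look like this:

```python
import math

def spatial_referring_reward(pred_point, gt_point, image_diag, tol=0.05):
    """Illustrative distance-based reward for a predicted 2D referring point.

    pred_point, gt_point: (x, y) pixel coordinates.
    image_diag: image diagonal in pixels, used to normalize the error.
    tol: normalized error under which full reward is given (made-up value).
    """
    err = math.dist(pred_point, gt_point) / image_diag  # normalized pixel error
    if err <= tol:
        return 1.0
    # Decay smoothly rather than returning a binary hit/miss, so the reward
    # remains a graded, metric-sensitive signal as predictions drift.
    return max(0.0, 1.0 - (err - tol) / (1.0 - tol))

# Example: a prediction about 18 px from the target on a 640x480 image.
print(spatial_referring_reward((412, 305), (400, 292), image_diag=800.0))
```

A graded reward of this kind keeps the RL signal informative when a prediction is close but not exact, which is the usual motivation for metric-sensitive rewards over binary correctness checks.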

Community

Paper author · Paper submitter

Project Page: https://zhoues.github.io/RoboRefer/

We present RoboRefer, the first 3D-aware VLM for multi-step spatial referring with explicit reasoning.

Highlights:

  • RoboRefer first acquires precise spatial understanding via SFT, and then develops strong, generalizable reasoning via RFT.

  • To support SFT and RFT training, we introduce RefSpatial, a large-scale dataset of 20M QA pairs (2x prior), covering 31 spatial relations (vs. 15 prior) and containing complex reasoning processes (up to 5 steps).

  • SFT-trained RoboRefer achieves SOTA spatial understanding, and RFT-trained RoboRefer exhibits generalizable spatial referring under novel spatial relation combinations.
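The abstract also notes that RoboRefer can be paired with various control policies on real robots. A common glue step in such pipelines, sketched below with placeholder pixel coordinates, depth, and intrinsics (this is standard pinhole back-projection, not necessarily the authors' exact integration), is lifting the predicted 2D point into a 3D camera-frame target using the depth map:

```python
import numpy as np

def backproject_point(u, v, depth_m, fx, fy, cx, cy):
    """Lift a 2D pixel (u, v) with metric depth into a 3D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Placeholder pixel, depth, and intrinsic values for illustration only.
target_cam = backproject_point(412, 305, depth_m=0.62,
                               fx=615.0, fy=615.0, cx=320.0, cy=240.0)
print(target_cam)  # 3D target a downstream policy (e.g., on a UR5) could move toward
```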

wonderful work

Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 1