RoboRefer & RefSpatial
Collection: RoboRefer weights, RefSpatial Dataset, and RefSpatial-Bench
This is the official checkpoint of our work, RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics.
NVILA-2B-Depth serves as the base model for both RoboRefer-2B-Depth-Align and RoboRefer-2B-SFT. It shares the same parameters as NVILA-Lite-2B, with the addition of a depth encoder and a depth projector, initialized from the image encoder and the image projector, respectively.
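To make that initialization concrete, here is a minimal, hypothetical PyTorch sketch; the class and attribute names are illustrative placeholders, not the actual NVILA/RoboRefer code. The depth encoder and depth projector start as deep copies of the image encoder and image projector, so both branches share identical initial weights but remain independent parameters that can diverge during training.

```python
import copy

import torch
import torch.nn as nn


class TwoBranchEncoder(nn.Module):
    """Toy stand-in for a vision stack with an added depth branch."""

    def __init__(self, image_encoder: nn.Module, image_projector: nn.Module):
        super().__init__()
        self.image_encoder = image_encoder
        self.image_projector = image_projector
        # Depth branch: initialized as a copy of the image branch, as the
        # model card describes; the copies are then trained separately.
        self.depth_encoder = copy.deepcopy(image_encoder)
        self.depth_projector = copy.deepcopy(image_projector)


# Toy linear layers standing in for the real encoder/projector modules.
model = TwoBranchEncoder(nn.Linear(768, 768), nn.Linear(768, 2048))

# Both branches start with identical weights...
assert torch.equal(model.image_encoder.weight, model.depth_encoder.weight)
# ...but are independent parameters, free to diverge during training.
assert model.image_encoder.weight is not model.depth_encoder.weight
```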
This model was created in June 2025.
If you find our code or models useful in your work, please cite our paper:
```bibtex
@article{zhou2025roborefer,
  title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
  author={Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
  journal={arXiv preprint arXiv:2506.04308},
  year={2025}
}
```
Base model: Efficient-Large-Model/NVILA-Lite-2B