---
license: apache-2.0
library_name: transformers
pipeline_tag: robotics
base_model:
- Efficient-Large-Model/NVILA-Lite-2B
---
# 🌏 RoboRefer
> This is the official checkpoint of our work: **RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics**
## Overview
NVILA-2B-Depth serves as the base model for both RoboRefer-2B-Depth-Align and RoboRefer-2B-SFT. It shares the same parameters as NVILA-Lite-2B, with the addition of a depth encoder and a depth projector, initialized from the image encoder and image projector, respectively.
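
Below is a minimal loading sketch, assuming the checkpoint exposes a remote-code `transformers` interface like its NVILA-Lite-2B base; the repo id used here is a placeholder, not a confirmed location of this checkpoint.

```python
# Minimal loading sketch (assumptions: the checkpoint ships custom modeling code
# usable via transformers' Auto* classes, as its NVILA-Lite-2B base does; the
# repo id below is a hypothetical placeholder).
from transformers import AutoConfig, AutoModel

repo_id = "Efficient-Large-Model/NVILA-2B-Depth"  # hypothetical repo id

config = AutoConfig.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo_id,
    trust_remote_code=True,  # custom NVILA modeling code lives in the repo
    torch_dtype="auto",
    device_map="auto",
)

# The depth branch mirrors the vision branch: a depth encoder and a depth
# projector, initialized from the image encoder and image projector.
print(model)
```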
## Resources for More Information
- Paper: https://arxiv.org/abs/2506.04308
- Code: https://github.com/Zhoues/RoboRefer
- Dataset: https://huggingface.co/datasets/JingkunAn/RefSpatial
- Benchmark: https://huggingface.co/datasets/BAAI/RefSpatial-Bench
- Website: https://zhoues.github.io/RoboRefer/
## Date
This model was created in June 2025.
## 📝 Citation
If you find our code or models useful in your work, please cite [our paper](https://arxiv.org/abs/2506.04308):
```bibtex
@article{zhou2025roborefer,
  title   = {RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
  author  = {Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
  journal = {arXiv preprint arXiv:2506.04308},
  year    = {2025}
}
```