Tags: Robotics · Transformers · Safetensors · llava_llama

🌏 RoboRefer

Links: Homepage · arXiv · Project Homepage · Dataset · Benchmark · Weights

This is the official checkpoint of our work: RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics
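
For convenience, here is a minimal loading sketch. This is an assumption, not the project's documented loading path: NVILA checkpoints use the custom llava_llama architecture listed above, so loading through Transformers depends on the repository shipping matching remote code; if it does not, use the loading utilities from the project homepage instead.

```python
from transformers import AutoConfig, AutoModel

# Hedged sketch: trust_remote_code=True lets Transformers resolve the
# repo's custom llava_llama model class, assuming the repo provides one.
config = AutoConfig.from_pretrained("Zhoues/NVILA-2B-Depth", trust_remote_code=True)
model = AutoModel.from_pretrained("Zhoues/NVILA-2B-Depth", trust_remote_code=True)
print(type(model).__name__)  # custom model class registered by the repo
```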

Overview

NVILA-2B-Depth serves as the base model for both RoboRefer-2B-Depth-Align and RoboRefer-2B-SFT. It shares the same parameters as NVILA-Lite-2B, with the addition of a depth encoder and a depth projector, initialized from the image encoder and the image projector, respectively.
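
For intuition, here is a minimal sketch of that initialization scheme. The module names (vision_tower, mm_projector, depth_tower, depth_projector) are hypothetical stand-ins for illustration, not the actual NVILA attribute names; the point is that each depth module starts as a deep copy of its pretrained image counterpart.

```python
import copy
import torch.nn as nn

# Minimal sketch with hypothetical module names: the depth branch is
# created by cloning the pretrained image branch, so the depth encoder
# and depth projector start from the image encoder/projector weights.
class DepthAugmentedVLM(nn.Module):
    def __init__(self, vision_tower: nn.Module, mm_projector: nn.Module):
        super().__init__()
        self.vision_tower = vision_tower                     # pretrained image encoder
        self.mm_projector = mm_projector                     # pretrained image projector
        self.depth_tower = copy.deepcopy(vision_tower)       # depth encoder (copied init)
        self.depth_projector = copy.deepcopy(mm_projector)   # depth projector (copied init)

# Stand-in modules for illustration only:
model = DepthAugmentedVLM(
    vision_tower=nn.Linear(1024, 1024),
    mm_projector=nn.Linear(1024, 2048),
)
```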

Resources for More Information

Paper: arXiv:2506.04308 (see the citation below)

Date

This model was created in June 2025.

📝 Citation

If you find our code or models useful in your work, please cite our paper:

@article{zhou2025roborefer,
    title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
    author={Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
    journal={arXiv preprint arXiv:2506.04308},
    year={2025}
}