Zhoues committed
Commit fbc026d · verified · 1 Parent(s): 8e8b508

Update README.md

Files changed (1): README.md (+12 -1)
README.md CHANGED
@@ -2,6 +2,8 @@
 license: apache-2.0
 library_name: transformers
 pipeline_tag: robotics
+base_model:
+- Efficient-Large-Model/NVILA-Lite-2B
 ---
 
 # 🌏 RoboRefer
@@ -12,6 +14,8 @@ pipeline_tag: robotics
 > This is the official checkpoint of our work: **RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics**
 
 
+
+
 ## Overview
 RoboRefer-2B-SFT is an open-source vision-language model that is instruction-tuned on a mixture of RefSpatial datasets, instruction tuning, and referring datasets.
 
@@ -20,6 +24,13 @@ RoboRefer-2B-SFT is an open-source vision-language model that is instruction-tun
 
 RoboRefer-2B-SFT has strong spatial understanding capability and achieves SOTA performance across diverse benchmarks. Given an image with instructions, it can not only answer your questions in both qualitative and quantitative ways using its spatial knowledge, but also output precise points for spatial referring to guide robotic control. For more details, please visit our [official repo](https://github.com/Zhoues/RoboRefer).
 
+
+## Resources for More Information
+- Paper: https://arxiv.org/abs/2506.04308
+- Code: https://github.com/Zhoues/RoboRefer
+- Website: https://zhoues.github.io/RoboRefer/
+
+
 ## Date
 This model was trained in June 2025.
 
@@ -36,4 +47,4 @@ If you find our code or models useful in your work, please cite [our paper](http
   journal={arXiv preprint arXiv:2506.04308},
   year={2025}
 }
-```
+```
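
The card's metadata (`library_name: transformers`, `pipeline_tag: robotics`) and the overview paragraph describe inference as: given an image and an instruction, the model answers spatial questions or returns points for referring. A minimal loading sketch along those lines follows; the Hub id, the `AutoModel`/`AutoProcessor` entry points, the prompt, and the `generate()` call are assumptions not confirmed by this commit, and the official repo linked in the card documents the supported inference pipeline.

```python
# Unverified usage sketch. Assumptions (not stated in this commit): the checkpoint
# lives at "Zhoues/RoboRefer-2B-SFT", ships remote code compatible with the generic
# transformers Auto* classes, and accepts an image plus a free-form instruction.
# The official repo (https://github.com/Zhoues/RoboRefer) is the supported path.
from transformers import AutoModel, AutoProcessor
from PIL import Image

model_id = "Zhoues/RoboRefer-2B-SFT"  # assumed Hub id

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("tabletop.jpg")  # any RGB scene
prompt = "Point to the free area between the mug and the keyboard."

inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)  # assumes a generate() head
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```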