## Overview

RoboRefer-2B-SFT is an open-source vision-language model instruction-tuned on a mixture of the RefSpatial dataset, general instruction-tuning data, and referring datasets.

## How to use

RoboRefer-2B-SFT has strong spatial understanding and achieves state-of-the-art performance across diverse benchmarks. Given an image and a language instruction, it can answer your questions both qualitatively and quantitatively using its spatial knowledge, and it can also output precise points for spatial referring to guide robotic control. For more details, please visit our [official repo](https://github.com/Zhoues/RoboRefer).
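The supported inference code lives in the official repo linked above; as a rough sketch only, loading the checkpoint through `transformers` with `trust_remote_code` might look like the following. The Hub id, the `<image>` prompt tag, and the `AutoModel`/`AutoProcessor` calls are all assumptions, not the documented API.

```python
# Hypothetical sketch only: RoboRefer's supported inference entrypoints are in
# the official repo (https://github.com/Zhoues/RoboRefer). The Hub id, the
# <image> prompt tag, and the AutoModel/AutoProcessor path are assumptions.

def build_prompt(instruction: str) -> str:
    """Pair an image placeholder with a spatial-referring instruction."""
    return f"<image>\n{instruction}"

def run_inference(image_path: str, instruction: str) -> str:
    # Imports are kept inside the function so build_prompt stays importable
    # without torch/transformers; calling this downloads the checkpoint.
    from PIL import Image
    from transformers import AutoModel, AutoProcessor

    model_id = "Zhoues/RoboRefer-2B-SFT"  # assumed Hub id
    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

    inputs = processor(
        text=build_prompt(instruction),
        images=Image.open(image_path),
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0]

print(build_prompt("Point to the mug to the left of the laptop."))
```

For a spatial-referring query, the model's decoded text would contain the answer or the predicted point; see the official repo for the exact output format.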
## Date

This model was trained in June 2025.

## 📝 Citation

If you find our code or models useful in your work, please cite [our paper](https://arxiv.org/pdf/2505.06111):

```
@article{zhou2025roborefer,
  title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},