Commit bd04070 (verified) · Zhoues · Parent: 796c1d1

Update README.md

Files changed (1): README.md (+61 −3)

---
license: apache-2.0
library_name: transformers
pipeline_tag: robotics
base_model:
- Efficient-Large-Model/NVILA-8B
---

# 🌏 RoboRefer


<a href="https://zhoues.github.io/RoboRefer"><img src="https://img.shields.io/badge/%F0%9F%8F%A0%20Project-Homepage-blue" alt="HomePage"></a>
<a href="https://arxiv.org/abs/2506.04308"><img src="https://img.shields.io/badge/arXiv%20paper-2506.04308-b31b1b.svg?logo=arxiv" alt="arXiv"></a>
<a href="https://github.com/Zhoues/RoboRefer"><img src="https://img.shields.io/badge/Code-RoboRefer-black?logo=github" alt="Code"></a>


<a href="https://huggingface.co/datasets/JingkunAn/RefSpatial"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-RefSpatial%20Dataset-brightgreen" alt="Dataset"></a>
<a href="https://huggingface.co/datasets/JingkunAn/RefSpatial-Bench"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Benchmark-RefSpatial%20Bench-green" alt="Benchmark"></a>
<a href="https://huggingface.co/collections/Zhoues/roborefer-and-refspatial-6857c97848fab02271310b89"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Weights-RoboRefer%20Model-yellow" alt="Weights"></a>


> This is the official checkpoint of our work: **RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics**


## Overview
RoboRefer-8B-SFT is an open-source vision-language model instruction-tuned on a mixture of the RefSpatial dataset, general instruction-tuning data, and referring datasets.


## How to use

RoboRefer-8B-SFT has strong spatial understanding capability and achieves SOTA performance across diverse benchmarks. Given an image and an instruction, it can not only answer questions both qualitatively and quantitatively using its spatial knowledge, but also output precise points for spatial referring to guide robotic control. For more details, please visit our [official repo](https://github.com/Zhoues/RoboRefer).
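
As a rough illustration only, the sketch below shows how such a checkpoint might be queried for a referring point via the Transformers remote-code path. The repo id `Zhoues/RoboRefer-8B-SFT`, the prompt, the processor call, and the generation settings are assumptions made for this example, not the officially supported pipeline; the supported inference code is documented in the official repo above.

```python
# Hypothetical usage sketch -- repo id, prompt format, and processor behavior are assumptions.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "Zhoues/RoboRefer-8B-SFT"  # assumed repo id; see the Weights collection linked above

# trust_remote_code lets Hub-hosted modeling code (llava_llama) define the architecture, if provided.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Placeholder image and spatial-referring instruction.
image = Image.open("scene.jpg")
prompt = "Point to the free area between the mug and the laptop."

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

# The model is expected to return text and/or 2D point coordinates for the referred location.
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

If this remote-code path does not apply to the checkpoint, fall back to the inference scripts provided in the official repo.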


## Resources for More Information
- Paper: https://arxiv.org/abs/2506.04308
- Code: https://github.com/Zhoues/RoboRefer
- Dataset: https://huggingface.co/datasets/JingkunAn/RefSpatial
- Benchmark: https://huggingface.co/datasets/BAAI/RefSpatial-Bench
- Website: https://zhoues.github.io/RoboRefer/


## Date
This model was trained in June 2025.


## 📝 Citation
If you find our code or models useful in your work, please cite [our paper](https://arxiv.org/abs/2506.04308):

```bibtex
@article{zhou2025roborefer,
  title={RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics},
  author={Zhou, Enshen and An, Jingkun and Chi, Cheng and Han, Yi and Rong, Shanyu and Zhang, Chi and Wang, Pengwei and Wang, Zhongyuan and Huang, Tiejun and Sheng, Lu and others},
  journal={arXiv preprint arXiv:2506.04308},
  year={2025}
}
```