Update README from hunarbatra/SpatialThinker-3B

#4
by hunarbatra - opened
Files changed (1)
README.md +115 -5
README.md CHANGED
@@ -1,7 +1,117 @@
  ---
- datasets:
- - OX-PIXL/STVQA-7K
- base_model:
- - Qwen/Qwen2.5-VL-3B-Instruct
+ license: apache-2.0
+ language:
+ - en
+ tags:
+ - spatial-reasoning
+ - multimodal
+ - vision-language
+ - scene-graph
+ - reinforcement-learning
+ base_model: Qwen/Qwen2.5-VL-3B-Instruct
+ pipeline_tag: image-text-to-text
  ---
- Paper: https://arxiv.org/abs/2511.07403
+
+ # SpatialThinker-3B
+
+ <p align="center">
+   <a href="https://arxiv.org/abs/2511.07403">
+     <img src="https://img.shields.io/badge/arXiv-2511.07403-b31b1b.svg" alt="arXiv">
+   </a>
+   <a href="https://hunarbatra.com/SpatialThinker">
+     <img src="https://img.shields.io/badge/🌐%20Project%20Page-blue.svg" alt="Project Page">
+   </a>
+   <a href="https://github.com/hunarbatra/SpatialThinker">
+     <img src="https://img.shields.io/badge/GitHub-Repository-black.svg" alt="GitHub">
+   </a>
+ </p>
+
+ **SpatialThinker-3B** is a 3D-aware multimodal large language model (MLLM) trained with reinforcement learning to integrate structured spatial grounding with multi-step reasoning. The model simulates human-like spatial perception by first constructing a scene graph of task-relevant objects and their spatial relations, then reasoning over it towards an answer, with training guided by dense spatial rewards.
+
+ ## Model Description
+
+ - **Base Model**: Qwen2.5-VL-3B-Instruct
+ - **Training**: GRPO (Group Relative Policy Optimization) with dense spatial rewards
+ - **Training Data**: STVQA-7K (7,587 spatial VQA samples)
+ - **Authors**: Hunar Batra, Haoqin Tu, Hardy Chen, Yuanze Lin, Cihang Xie, Ronald Clark
+ - **Institutions**: University of Oxford, UC Santa Cruz
+
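+ As a quick illustration of the GRPO recipe listed above, the snippet below shows the group-relative advantage computation that gives the algorithm its name: rewards for a group of sampled completions are normalized against the group mean and standard deviation. This is a generic sketch for orientation only, not the paper's training code, and the reward values are placeholders.
+
+ ```python
+ import numpy as np
+
+ def grpo_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
+     """Group-relative advantages: each completion's reward is normalized
+     against the mean/std of all completions sampled for the same prompt."""
+     return (rewards - rewards.mean()) / (rewards.std() + eps)
+
+ # Example: 8 completions sampled for one spatial VQA prompt, each scored by the
+ # reward function; a positive advantage means better than the group average.
+ rewards = np.array([0.2, 0.9, 0.4, 0.9, 0.1, 0.6, 0.9, 0.3])
+ advantages = grpo_advantages(rewards)
+ ```
+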
+ ## Key Features
+
+ - **Structured Spatial Reasoning**: Constructs question-focused scene subgraphs with objects, bounding boxes, and relations
+ - **Dense Spatial Rewards**: Multi-objective reward function enforcing format, count, accuracy, and spatial grounding (sketched below)
+ - **9 Spatial Reasoning Categories**: Relations, reach, size, orientation, instance location, depth, distance, count, and existence
+ - **Outperforms GPT-4o** on spatial understanding benchmarks while using only ~7K training samples
+
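+ The exact reward is defined in the paper and the released training code; the snippet below is only an illustrative sketch of how format, count, accuracy, and spatial-grounding (IoU-based) terms could be combined into a single scalar. The function names, weights, and matching strategy here are assumptions made for the example, not the paper's formulation.
+
+ ```python
+ import re
+
+ def iou(a, b):
+     """IoU of two boxes given as (x1, y1, x2, y2)."""
+     x1, y1 = max(a[0], b[0]), max(a[1], b[1])
+     x2, y2 = min(a[2], b[2]), min(a[3], b[3])
+     inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
+     union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
+     return inter / (union + 1e-6)
+
+ def spatial_reward(response, gt_answer, gt_boxes, pred_boxes, w=(0.1, 0.1, 0.6, 0.2)):
+     """Illustrative composite reward: format + count + accuracy + grounding."""
+     # Format: all four tag blocks present in the expected order.
+     fmt = 1.0 if re.search(r"<observe>.*</observe>.*<scene>.*</scene>.*<think>.*</think>.*<answer>.*</answer>", response, re.S) else 0.0
+     # Count: number of predicted scene-graph objects vs. the reference count.
+     cnt = 1.0 - min(abs(len(pred_boxes) - len(gt_boxes)) / max(len(gt_boxes), 1), 1.0)
+     # Accuracy: final answer matches the ground-truth option.
+     m = re.search(r"<answer>(.*?)</answer>", response, re.S)
+     acc = 1.0 if m and gt_answer.strip().lower() in m.group(1).strip().lower() else 0.0
+     # Spatial grounding: average best-match IoU of predicted vs. reference boxes.
+     grd = (sum(max(iou(p, g) for g in gt_boxes) for p in pred_boxes) / len(pred_boxes)
+            if pred_boxes and gt_boxes else 0.0)
+     return w[0] * fmt + w[1] * cnt + w[2] * acc + w[3] * grd
+ ```
+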
46
+ ## Inference Template
47
+
48
+ Use the following template for inference:
49
+
50
+ ```
51
+ You FIRST observe the image in <observe> </observe> tags, then visualise the relevant scene graph in <scene> </scene> tags, followed by thinking about the reasoning process as an internal monologue within <think> </think> tags and then provide the final answer. The final answer MUST BE put within <answer> </answer> tags, and only return the final choice including the correct option and answer within the answer tags, e.g., <answer> (A) cat </answer>.
52
+
53
+ Image size: {Width} x {Height}
54
+ ```
+
+ ## Usage
+
+ ```python
+ from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
+ from PIL import Image
+
+ # Load the model and processor
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     "OX-PIXL/SpatialThinker-3B",
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ processor = AutoProcessor.from_pretrained("OX-PIXL/SpatialThinker-3B")
+
+ # Load image
+ image = Image.open("your_image.jpg")
+ width, height = image.size
+
+ # Prepare prompt with template
+ template = f"""You FIRST observe the image in <observe> </observe> tags, then visualise the relevant scene graph in <scene> </scene> tags, followed by thinking about the reasoning process as an internal monologue within <think> </think> tags and then provide the final answer. The final answer MUST BE put within <answer> </answer> tags, and only return the final choice including the correct option and answer within the answer tags, e.g., <answer> (A) cat </answer>.
+
+ Image size: {width} x {height}"""
+
+ question = "Where is the cat relative to the couch? (A) on top of (B) in front of (C) behind (D) beside"
+
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "image": image},
+             {"type": "text", "text": template + "\n\n" + question},
+         ],
+     }
+ ]
+
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(**inputs, max_new_tokens=1024)
+ # Strip the prompt tokens so only the newly generated response is decoded
+ generated_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
+ output = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(output)
+ ```
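+ The decoded response follows the tag template above, so the final choice can be pulled out of the `<answer>` block with a small regex helper. This helper is not part of the released code; it is just a minimal sketch:
+
+ ```python
+ import re
+ from typing import Optional
+
+ def extract_answer(response: str) -> Optional[str]:
+     """Return the content of the last <answer> ... </answer> block, if any."""
+     matches = re.findall(r"<answer>(.*?)</answer>", response, flags=re.S)
+     return matches[-1].strip() if matches else None
+
+ print(extract_answer(output))  # e.g. "(A) on top of"
+ ```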
+
+ ## Citation
+
+ ```bibtex
+ @misc{batra2025spatialthinkerreinforcing3dreasoning,
+       title={SpatialThinker: Reinforcing 3D Reasoning in Multimodal LLMs via Spatial Rewards},
+       author={Hunar Batra and Haoqin Tu and Hardy Chen and Yuanze Lin and Cihang Xie and Ronald Clark},
+       year={2025},
+       eprint={2511.07403},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2511.07403},
+ }
+ ```
+
+ ## Links
+
+ - 📄 **Paper**: [arXiv:2511.07403](https://arxiv.org/abs/2511.07403)
+ - 🌐 **Project Page**: [hunarbatra.com/SpatialThinker](https://hunarbatra.com/SpatialThinker)
+ - 💻 **GitHub**: [github.com/hunarbatra/SpatialThinker](https://github.com/hunarbatra/SpatialThinker)
+ - 🤗 **Dataset**: [OX-PIXL/STVQA-7K](https://huggingface.co/datasets/OX-PIXL/STVQA-7K)