AimBot: A Simple Auxiliary Visual Cue to Enhance Spatial Awareness of Visuomotor Policies
Abstract
AimBot, a lightweight visual augmentation technique, improves visuomotor policy learning in robotic manipulation by overlaying spatial cues onto RGB images, enhancing performance in both simulation and real-world settings.
In this paper, we propose AimBot, a lightweight visual augmentation technique that provides explicit spatial cues to improve visuomotor policy learning in robotic manipulation. AimBot overlays shooting lines and scope reticles onto multi-view RGB images, offering auxiliary visual guidance that encodes the end-effector's state. The overlays are computed from depth images, camera extrinsics, and the current end-effector pose, explicitly conveying spatial relationships between the gripper and objects in the scene. AimBot incurs minimal computational overhead (less than 1 ms) and requires no changes to model architectures, as it simply replaces original RGB images with augmented counterparts. Despite its simplicity, our results show that AimBot consistently improves the performance of various visuomotor policies in both simulation and real-world settings, highlighting the benefits of spatially grounded visual feedback.
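The following is a minimal sketch of the kind of overlay the abstract describes, not the authors' released code. It assumes a pinhole camera with known intrinsics `K` and world-to-camera extrinsics `T_wc`, the end-effector pose as a 4x4 world-frame matrix `T_we`, a per-pixel depth image for a simple occlusion heuristic, and OpenCV for drawing; the function names, the choice of the end-effector z-axis as the approach direction, and the occlusion threshold are illustrative assumptions.

```python
# Hedged sketch: project the gripper tip into an RGB view and draw a
# "shooting line" plus a scope reticle, dimming the overlay when the
# depth image suggests the tip is occluded.
import numpy as np
import cv2


def project_point(p_world, K, T_wc):
    """Project a 3D world point into pixel coordinates; returns (u, v, depth)."""
    p_cam = T_wc[:3, :3] @ p_world + T_wc[:3, 3]
    uv = K @ p_cam
    return uv[0] / uv[2], uv[1] / uv[2], p_cam[2]


def draw_aim_overlay(rgb, depth, K, T_wc, T_we, line_len=0.25):
    """Overlay a reticle at the gripper tip and a line along its approach axis."""
    img = rgb.copy()

    # Gripper tip position and (assumed) approach direction in the world frame.
    tip = T_we[:3, 3]
    approach = T_we[:3, 2]  # z-axis of the end-effector frame

    # Project the tip and a point ahead of it along the approach direction.
    u0, v0, z0 = project_point(tip, K, T_wc)
    u1, v1, _ = project_point(tip + line_len * approach, K, T_wc)

    # Occlusion heuristic: if the scene depth at the tip pixel is closer to the
    # camera than the tip itself, something blocks the view, so dim the cue.
    h, w = depth.shape
    ui, vi = int(round(u0)), int(round(v0))
    visible = 0 <= ui < w and 0 <= vi < h and depth[vi, ui] >= z0 - 0.01
    color = (0, 255, 0) if visible else (0, 128, 0)

    # Shooting line from the tip along the approach direction.
    cv2.line(img, (int(u0), int(v0)), (int(u1), int(v1)), color, 2)

    # Scope reticle: circle plus crosshair centered on the gripper tip.
    cv2.circle(img, (int(u0), int(v0)), 12, color, 2)
    cv2.line(img, (int(u0) - 18, int(v0)), (int(u0) + 18, int(v0)), color, 1)
    cv2.line(img, (int(u0), int(v0) - 18), (int(u0), int(v0) + 18), color, 1)
    return img
```

Because the overlay is baked directly into the RGB frames, a policy consumes `draw_aim_overlay(rgb, depth, K, T_wc, T_we)` in place of the raw image, with no change to the model architecture, consistent with the drop-in replacement the abstract describes.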
Community
TL;DR: AimBot is a lightweight visual augmentation technique that provides explicit spatial cues to improve VLA models.
Website: https://aimbot-reticle.github.io/
The following similar papers were recommended by the Semantic Scholar API:
- Video Generators are Robot Policies (2025)
- 4D-VLA: Spatiotemporal Vision-Language-Action Pretraining with Cross-Scene Calibration (2025)
- Evo-0: Vision-Language-Action Model with Implicit Spatial Understanding (2025)
- cVLA: Towards Efficient Camera-Space VLAs (2025)
- MolmoAct: Action Reasoning Models that can Reason in Space (2025)
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge (2025)
- Learning to See and Act: Task-Aware View Planning for Robotic Manipulation (2025)