---
license: cc-by-sa-4.0
task_categories:
  - object-detection
  - robotics
tags:
  - Autonomous-Driving
  - Egocentric-Perception
  - Crowded-Unstructured-Environments
---

# RoboSense

Large-scale Dataset and Benchmark for Egocentric Robot Perception and Navigation in Crowded and Unstructured Environments


## Description

- RoboSense is a large-scale multimodal dataset constructed to facilitate egocentric robot perception, especially in crowded and unstructured environments.
- It contains more than 133K synchronized frames from 3 main sensor types (Camera, LiDAR and Fisheye), with 1.4M 3D bounding boxes and track IDs annotated over the full $360^{\circ}$ view, forming 216K trajectories across 7.6K temporal sequences.
- It has $270\times$ and $18\times$ as many near-range annotations of surrounding obstacles as KITTI and nuScenes, respectively, two previous datasets collected for autonomous driving scenarios.
- Based on RoboSense, we formulate 6 benchmarks covering both perception and prediction tasks to facilitate future research.

For more information, please visit our GitHub repository: https://github.com/suhaisheng/RoboSense.

## Data Format

1. Each image file is named according to the following format:

```
image_{trainval/test}/processed_data_{date}/images/{cam_id}/{timestamp}.jpg
```

where `cam_id` ranges from 0 to 7, with 0-3 indicating the Camera image folders and 4-7 indicating the Fisheye image folders.
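As a sketch, a path in this format can be split into its components with the standard library; the helper name and the sample date/timestamp below are illustrative, not taken from the dataset:

```python
from pathlib import Path

def parse_image_path(path: str) -> dict:
    """Split a RoboSense image path into its named components.

    Assumes the image_{split}/processed_data_{date}/images/{cam_id}/{timestamp}.jpg
    layout described above; verify against your local copy of the dataset.
    """
    p = Path(path)
    cam_id = int(p.parent.name)
    return {
        "split": p.parts[0].replace("image_", ""),
        "date": p.parts[1].replace("processed_data_", ""),
        "cam_id": cam_id,
        # cam_id 0-3 are Camera folders, 4-7 are Fisheye folders
        "sensor": "Camera" if cam_id <= 3 else "Fisheye",
        "timestamp": p.stem,
    }

# Illustrative path (date and timestamp are made up):
info = parse_image_path("image_trainval/processed_data_20230101/images/5/1672531200123.jpg")
```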

2. Each Hesai point cloud is named according to the following format:

```
lidar_occ_{trainval/test}/processed_data_{date}/hs64/{cam_id}/{timestamp}.bin
```
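A common way to read such a raw `.bin` dump is with NumPy. The per-point field layout is an assumption here (a KITTI-style x, y, z, intensity float32 layout); check the released data, since datasets differ in the number of fields per point:

```python
import numpy as np

def load_hesai_bin(path: str, fields_per_point: int = 4) -> np.ndarray:
    """Read a raw float32 point-cloud dump into an (N, fields) array.

    fields_per_point=4 assumes an x, y, z, intensity layout; adjust it
    if the RoboSense dump stores additional per-point fields.
    """
    pts = np.fromfile(path, dtype=np.float32)
    return pts.reshape(-1, fields_per_point)
```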

3. Each Livox point cloud is named according to the following format:

```
lidar_occ_{trainval/test}/processed_data_{date}/livox/{cam_id}/{timestamp}.pcd
```

4. Each occupancy annotation file is named according to the following format:

```
lidar_occ_{trainval/test}/processed_data_{date}/occ/{cam_id}/{timestamp}.npz
```
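The `.npz` archives can be opened with `np.load`; the key names inside are not documented in this README, so listing them first (via `data.files`) is the safe approach. A minimal sketch:

```python
import numpy as np

def inspect_occ_npz(path: str) -> dict:
    """Load an occupancy annotation archive and return its arrays by key.

    The key names inside the .npz are dataset-specific; print the dict's
    keys to discover them before relying on any particular name.
    """
    with np.load(path) as data:
        return {key: data[key] for key in data.files}
```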

5. The training/validation splits, which contain the 3D box/trajectory annotations and calibrations, are provided at the following path:

```
RoboSense/splits/robosense_local_{train/val}.pkl
```
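The split files are standard Python pickles, so loading them is a one-liner; the structure of the loaded object (e.g. per-frame records with box/trajectory/calibration entries) is not documented here, so inspect it after loading:

```python
import pickle

def load_split(path: str):
    """Load a RoboSense split file (a pickled Python object).

    The exact schema of the returned object is not documented in this
    README; inspect it (type, keys, length) after loading.
    """
    with open(path, "rb") as f:
        return pickle.load(f)
```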

## License

All assets and code in this repo are under the CC BY-NC-SA 4.0 license unless specified otherwise.

## Citation

If you find RoboSense useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

```bibtex
@inproceedings{su2025robosense,
  title={RoboSense: Large-scale Dataset and Benchmark for Egocentric Robot Perception and Navigation in Crowded and Unstructured Environments},
  author={Su, Haisheng and Song, Feixiang and Ma, Cong and Wu, Wei and Yan, Junchi},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```