---
size_categories:
- 100B
---
## Dataset Description
This repository contains the dataset for the paper `Implicit Event-RGBD Neural SLAM`, the first event-RGBD implicit neural SLAM framework that efficiently leverages event streams and RGBD data to overcome challenges in scenes with extreme motion blur and lighting variation. **DEV-Indoors** is generated with Blender [6] and the simulator [14], covers normal, motion-blur, and dark scenes, and provides 9 subsets with RGB images, depth maps, event streams, meshes, and trajectories. **DEV-Reals** is captured from real scenes and provides 8 challenging subsets under motion blur and lighting variation.
### Dataset Sources
- [Paper](https://arxiv.org/abs/2311.11013)
- [Project Page](https://delinqu.github.io/EN-SLAM)
## Update
- [x] Release the DEV-Indoors and DEV-Reals datasets.
- [x] Add dataset usage instructions.
## Usage
- Download and Extract (`export HF_ENDPOINT=https://hf-mirror.com` may help if huggingface.co is blocked for you)
```bash
huggingface-cli download --resume-download --local-dir-use-symlinks False delinqu/EN-SLAM-Dataset --local-dir EN-SLAM-Dataset
# Alternatively, you can clone the repo with git
git lfs install
git clone https://huggingface.co/datasets/delinqu/EN-SLAM-Dataset
```
If you only want to download a specific subset, use the following code:
```python
from huggingface_hub import hf_hub_download
hf_hub_download(
    repo_id="delinqu/EN-SLAM-Dataset",
    filename="DEV-Indoors_config.tar.gz",
    repo_type="dataset",
    local_dir=".",
)
```
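If you need several files at once, `snapshot_download` from the same library accepts glob filters. The `"DEV-Reals*"` pattern below is only illustrative; adjust it to the subsets you actually want:
```python
from huggingface_hub import snapshot_download

# Fetch only the files whose paths match the given patterns
# (the "DEV-Reals*" glob is an example, not a required name).
snapshot_download(
    repo_id="delinqu/EN-SLAM-Dataset",
    repo_type="dataset",
    allow_patterns=["DEV-Reals*"],
    local_dir=".",
)
```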
After downloading, run the following script from the project root to extract the `tar.gz` archives. The Python script simply unpacks every `tar.gz` file, so feel free to customise it:
```bash
python scripts/extract_dataset.py
```
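For reference, the behaviour described above amounts to something like this sketch (a guess at the script's core logic, not the shipped implementation):
```python
import tarfile
from pathlib import Path

# Walk the project root and unpack every .tar.gz archive in place
# (sketch only; scripts/extract_dataset.py may differ in detail).
for archive in sorted(Path(".").rglob("*.tar.gz")):
    print(f"Extracting {archive} ...")
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=archive.parent)
```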
The extracted dataset follows the directory structure documented in the Dataset Format section below.
- Use a Dataloader
Please refer to `datasets/dataset.py` for the dataloaders of `DEVIndoors` and `DEVReals`.
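For orientation, here is a hypothetical minimal loader for one DEV-Indoors sequence. The directory names follow the structure in the Dataset Format section, but the per-frame pose layout (a 4x4 camera-to-world matrix in a text file) is an assumption; treat `datasets/dataset.py` as authoritative:
```python
from pathlib import Path

import cv2
import numpy as np

def load_frame(seq_dir: str, idx: int):
    """Load the idx-th RGB image, depth map, and pose of a sequence."""
    seq = Path(seq_dir)
    rgb_file = sorted((seq / "rgb").iterdir())[idx]
    depth_file = sorted((seq / "depth").iterdir())[idx]
    pose_file = sorted((seq / "pose").iterdir())[idx]
    rgb = cv2.cvtColor(cv2.imread(str(rgb_file)), cv2.COLOR_BGR2RGB)
    depth = cv2.imread(str(depth_file), cv2.IMREAD_UNCHANGED)
    pose = np.loadtxt(pose_file).reshape(4, 4)  # assumed camera-to-world
    return rgb, depth, pose

rgb, depth, pose = load_frame("seq001_room_norm", 0)
```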
- Evaluation
To construct the evaluation subsets, we use `frustum + occlusion + virtual cameras` culling, which introduces extra virtual views to cover the occluded parts inside the region of interest, as in CoSLAM. The evaluation data are generated by randomly sampling 2000 poses and rendering the corresponding depths in Blender for each scene; we further add extra virtual views manually to cover all scenes. This process helps to evaluate the view-synthesis and hole-filling capabilities of an algorithm. Please follow [neural_slam_eval](https://github.com/JingwenWang95/neural_slam_eval) with our ground-truth point clouds and images.
## Dataset Format
### DEV-Indoors Dataset
* data structure
``` bash
├── groundtruth # evaluation metadata: pose, rgb, depth, mesh
│ ├── apartment
│ ├── room
│ └── workshop
├── seq001_room_norm # normal sequence: event, rgb, depth, pose, camera_para
│ ├── camera_para.txt
│ ├── depth
│ ├── depth_mm
│ ├── event.zip
│ ├── pose
│ ├── rgb
│ ├── timestamps.txt
│ └── seq001_room_norm.yaml
├── seq002_room_blur # blur sequence: event, rgb, depth, pose, camera_para
│ ├── depth
│ ├── depth_mm
│ ├── event.zip
│ ├── pose
│ ├── rgb
│ ├── timestamps.txt
│ └── seq002_room_blur.yaml
├── seq003_room_dark # dark sequence: event, rgb, depth, pose, camera_para
│ ├── depth
│ ├── depth_mm
│ ├── event.zip
│ ├── pose
│ ├── rgb
│ ├── timestamps.txt
│ └── seq003_room_dark.yaml
...
└── seq009_workshop_dark
├── depth
├── depth_mm
├── event.zip
├── pose
├── rgb
├── timestamps.txt
└── seq009_workshop_dark.yaml
```
* model: 3D models of the room, apartment, and workshop scenes
``` bash
model
├── apartment
│ ├── apartment.blend
│ ├── hdri
│ ├── room.blend
│ ├── supp
│ └── Textures
└── workshop
├── hdri
├── Textures
└── workshop.blend
```
* scripts: scripts for data generation and visualization.
``` bash
scripts
├── camera_intrinsic.py # blender camera intrinsic generation tool.
├── camera_pose.py # blender camera pose generation tool.
├── npzs_to_frame.py # convert npz to frame.
├── read_ev.py # read event data.
└── viz_ev_frame.py # visualize event and frame.
```
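To give a feel for consuming the event data, below is a hypothetical sketch that accumulates one event window into a signed polarity frame. The `.npz` key names (`x`, `y`, `p`) and the default resolution are assumptions; `scripts/read_ev.py` and `scripts/npzs_to_frame.py` define the actual format:
```python
import numpy as np

def events_to_frame(npz_path: str, height: int = 260, width: int = 346):
    """Accumulate events into a frame: +1 per ON event, -1 per OFF event."""
    ev = np.load(npz_path)  # assumed arrays: x, y, p (pixel coords, polarity)
    x = ev["x"].astype(int)
    y = ev["y"].astype(int)
    sign = np.where(ev["p"] > 0, 1, -1)
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (y, x), sign)  # unbuffered add handles repeated pixels
    return frame
```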
### DEV-Reals Dataset
``` bash
DEV-Reals
├── devreals.yaml # dataset metadata: camera parameters, cam2davis transformation matrix
│
├── enslamdata1 # sequence: davis346, pose, rgbd
│ ├── davis346
│ ├── pose
│ └── rgbd
├── enslamdata1.bag
├── enslamdata2
│ ├── davis346
│ ├── pose
│ └── rgbd
├── enslamdata2.bag
├── enslamdata3
│ ├── davis346
│ ├── pose
│ └── rgbd
├── enslamdata3.bag
...
├── enslamdata8
│ ├── davis346
│ ├── pose
│ └── rgbd
└── enslamdata8.bag
```
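The `enslamdataN.bag` files are standard ROS1 bags, so they can be inspected with the `rosbag` Python API (requires a ROS1 installation). Topic names vary per recording, so listing the contents first is a safe starting point:
```python
import rosbag  # ships with ROS1 (e.g. ros-noetic-rosbag)

# List every topic in the bag with its message type and count.
with rosbag.Bag("enslamdata1.bag") as bag:
    info = bag.get_type_and_topic_info()
    for topic, meta in info.topics.items():
        print(topic, meta.msg_type, meta.message_count)
```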
## Citation
If you use this work or find it helpful, please consider citing:
```bibtex
@inproceedings{qu2023implicit,
  title={Implicit Event-RGBD Neural SLAM},
  author={Delin Qu and Chi Yan and Dong Wang and Jie Yin and Qizhi Chen and Yiting Zhang and Dan Xu and Bin Zhao and Xuelong Li},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
```