---
size_categories:
- 100B<n<1T
---

# Dataset Card for EN-SLAM (Implicit Event-RGBD Neural SLAM, CVPR24)

<p align="center">
    <img src="./asset/dataset.png" width="80%" title="Overview of the DEV-Indoors and DEV-Reals datasets">
</p>

## Dataset Description

This repository contains the dataset for the paper `Implicit Event-RGBD Neural SLAM`, the first event-RGBD implicit neural SLAM framework that efficiently leverages event streams and RGBD to overcome the challenges of scenes with extreme motion blur and lighting variation. **DEV-Indoors** is obtained through Blender [6] and a simulator [14], covering normal, motion-blur, and dark scenes, and provides 9 subsets with RGB images, depth maps, event streams, meshes, and trajectories. **DEV-Reals** is captured from real scenes and provides 8 challenging subsets under motion blur and lighting variation.

### Dataset Sources

- [Paper](https://arxiv.org/abs/2311.11013)
- [Project Page](https://delinqu.github.io/EN-SLAM)

## Update

- [x] Release the DEV-Indoors and DEV-Reals datasets.
- [x] Add dataset usage instructions.

## Usage

- Download and Extract (`export HF_ENDPOINT=https://hf-mirror.com` may be helpful if you are blocked)

```bash
huggingface-cli download --resume-download --local-dir-use-symlinks False delinqu/EN-SLAM-Dataset --local-dir EN-SLAM-Dataset

# Alternatively, you can git clone the repo
git lfs install
git clone https://huggingface.co/datasets/delinqu/EN-SLAM-Dataset
```

If you only want to download a specific subset, use the following code:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="delinqu/EN-SLAM-Dataset",
    filename="DEV-Indoors_config.tar.gz",
    repo_type="dataset",
    local_dir=".",
)
```
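
If you need several related archives at once rather than a single file, `snapshot_download` with a filename pattern is a convenient alternative; the pattern below is only an example and should be adjusted to the archive names actually present in the repo:

```python
from huggingface_hub import snapshot_download

# Download only files whose names match the pattern; "DEV-Reals*" is an
# example pattern, not a guaranteed file prefix in the repo.
snapshot_download(
    repo_id="delinqu/EN-SLAM-Dataset",
    repo_type="dataset",
    allow_patterns=["DEV-Reals*"],
    local_dir="EN-SLAM-Dataset",
)
```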

After downloading, run the following script from the project root to extract the `tar.gz` archives. The Python script simply unpacks all the `tar.gz` files, so feel free to customise it:

```bash
python scripts/extract_dataset.py
```
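
For reference, a minimal sketch of what such an extraction step does (the shipped `scripts/extract_dataset.py` is the authoritative version; this is only an equivalent illustration):

```python
import glob
import tarfile

# Extract every tar.gz archive found under the current directory in place.
for archive in glob.glob("**/*.tar.gz", recursive=True):
    print(f"extracting {archive}")
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=".")
```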

The extracted dataset will have the following structure:

<p align="center">
    <img src="./asset/structure.png" width="80%" title="Structure of the extracted dataset">
</p>

- Use a Dataloader

Please refer to `datasets/dataset.py` for the `DEVIndoors` and `DEVReals` dataloaders; the sketch below shows a quick way to sanity-check an extracted sequence folder before loading it.
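
A minimal sketch, assuming the extracted layout shown above (the paths are examples and the YAML keys depend on the sequence config, so treat this as an illustration only):

```python
from pathlib import Path

import yaml

# Example path; adjust to wherever you extracted the dataset.
seq = Path("DEV-Indoors/seq001_room_norm")

# Per-sequence config shipped with every DEV-Indoors sequence.
cfg = yaml.safe_load((seq / "seq001_room_norm.yaml").read_text())
print(sorted(cfg.keys()))

# Count the per-frame assets listed in the dataset structure above.
for sub in ("rgb", "depth", "depth_mm", "pose"):
    print(sub, len(list((seq / sub).iterdir())))
```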

- Evaluation

To construct the evaluation subsets, we use the `frustum + occlusion + virtual cameras` culling strategy from CoSLAM, which introduces extra virtual views to cover the occluded parts inside the region of interest. The evaluation data are generated by randomly sampling 2000 poses and rendering the corresponding depths in Blender for each scene, and we further add virtual views manually to cover all scenes. This process helps to evaluate the view-synthesis and hole-filling capabilities of an algorithm. Please follow [neural_slam_eval](https://github.com/JingwenWang95/neural_slam_eval) with our ground-truth point clouds and images.

## Dataset Format

### DEV-Indoors Dataset

* data structure

```bash
├── groundtruth                # evaluation metadata: pose, rgb, depth, mesh
│   ├── apartment
│   ├── room
│   └── workshop
├── seq001_room_norm           # normal sequence: event, rgb, depth, pose, camera_para
│   ├── camera_para.txt
│   ├── depth
│   ├── depth_mm
│   ├── event.zip
│   ├── pose
│   ├── rgb
│   ├── timestamps.txt
│   └── seq001_room_norm.yaml
├── seq002_room_blur           # blur sequence: event, rgb, depth, pose, camera_para
│   ├── depth
│   ├── depth_mm
│   ├── event.zip
│   ├── pose
│   ├── rgb
│   ├── timestamps.txt
│   └── seq002_room_blur.yaml
├── seq003_room_dark           # dark sequence: event, rgb, depth, pose, camera_para
│   ├── depth
│   ├── depth_mm
│   ├── event.zip
│   ├── pose
│   ├── rgb
│   ├── timestamps.txt
│   └── seq003_room_dark.yaml
...
└── seq009_workshop_dark
    ├── depth
    ├── depth_mm
    ├── event.zip
    ├── pose
    ├── rgb
    ├── timestamps.txt
    └── seq009_workshop_dark.yaml
```

* model: 3D models of the room, apartment, and workshop scenes

<p align="center">
    <img src="./asset/model.png" width="80%" title="The models and trajectories of the DEV-Indoors dataset in Blender">
</p>

```bash
model
├── apartment
│   ├── apartment.blend
│   ├── hdri
│   ├── room.blend
│   ├── supp
│   └── Textures
└── workshop
    ├── hdri
    ├── Textures
    └── workshop.blend
```

* scripts: scripts for data generation and visualization; a quick standalone peek at `event.zip` is sketched after the listing.

```bash
scripts
├── camera_intrinsic.py        # blender camera intrinsic generation tool.
├── camera_pose.py             # blender camera pose generation tool.
├── npzs_to_frame.py           # convert npz to frame.
├── read_ev.py                 # read event data.
└── viz_ev_frame.py            # visualize event and frame.
```
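
`read_ev.py` is the authoritative reader for the event data. For a standalone look at what a sequence's `event.zip` contains, a minimal sketch (the path is an example, and nothing is assumed about the members beyond checking for `.npz`, cf. `npzs_to_frame.py`):

```python
import io
import zipfile

import numpy as np

# Inspect the packed event stream of one sequence; the path is an example.
with zipfile.ZipFile("seq001_room_norm/event.zip") as zf:
    names = zf.namelist()
    print(f"{len(names)} files, e.g. {names[:3]}")

    # If the members are .npz archives (cf. npzs_to_frame.py), peek at the keys.
    first = names[0]
    if first.endswith(".npz"):
        with zf.open(first) as f:
            data = np.load(io.BytesIO(f.read()))
            print(list(data.keys()))
```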

### DEV-Reals Dataset

```bash
DEV-Reals
├── devreals.yaml              # dataset metadata: camera parameters, cam2davis transformation matrix
├── enslamdata1                # sequence: davis346, pose, rgbd
│   ├── davis346
│   ├── pose
│   └── rgbd
├── enslamdata1.bag
├── enslamdata2
│   ├── davis346
│   ├── pose
│   └── rgbd
├── enslamdata2.bag
├── enslamdata3
│   ├── davis346
│   ├── pose
│   └── rgbd
├── enslamdata3.bag
...
├── enslamdata8
│   ├── davis346
│   ├── pose
│   └── rgbd
└── enslamdata8.bag
```
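
Each DEV-Reals sequence also ships as a ROS bag (`enslamdataN.bag`). `rosbag info enslamdata1.bag` lists its contents from the command line; the equivalent check with the ROS1 `rosbag` Python API is sketched below (topic names are printed rather than hard-coded, since they depend on the recording setup):

```python
import rosbag

# List the topics, message types, and message counts in one DEV-Reals recording.
with rosbag.Bag("enslamdata1.bag") as bag:
    info = bag.get_type_and_topic_info()
    for topic, meta in info.topics.items():
        print(f"{topic}: {meta.msg_type} x {meta.message_count}")
```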

## Citation

If you use this work or find it helpful, please consider citing:

```bibtex
@inproceedings{qu2023implicit,
    title={Implicit Event-RGBD Neural SLAM},
    author={Delin Qu and Chi Yan and Dong Wang and Jie Yin and Qizhi Chen and Yiting Zhang and Dan Xu and Bin Zhao and Xuelong Li},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2024}
}
```