Data Format

Here we explain the poses_bounds.npy file format. This file stores a numpy array of size Nx17 (where N is the number of input videos). You can load the data using the following code.

import os
import numpy as np

poses_arr = np.load(os.path.join(basedir, 'poses_bounds.npy'))            # shape (N, 17)
poses = poses_arr[:, :-2].reshape([-1, 3, 5]).transpose([1, 2, 0])        # 3x5 pose matrices, shape (3, 5, N)
bds = poses_arr[:, -2:].transpose([1, 0])                                 # near/far depth bounds, shape (2, N)

Each row of length 17 is reshaped into a 3x5 pose matrix and two depth values that bound the closest and farthest scene content from that point of view.

The pose matrix is a 3x4 camera-to-world affine transform concatenated with a 3x1 column [image height, image width, focal length] to represent the intrinsics (we assume the principal point is centered and that the focal length is the same for both x and y).

NOTE: In our dataset, the focal lengths differ between cameras, so the focal length must be read per camera!
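
As a minimal sketch (building on the `poses` array loaded above; the variable names below are illustrative and not part of the dataset), each 3x5 pose can be split into its extrinsic and intrinsic parts:

# Split each 3x5 pose into the 3x4 camera-to-world matrix and the [h, w, f] column.
c2w = poses[:, :4, :]           # 3x4 camera-to-world extrinsics, shape (3, 4, N)
hwf = poses[:, 4, :]            # [image height, image width, focal length] per camera, shape (3, N)
heights, widths, focals = hwf   # per-camera values; note the focal lengths are not all the same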

The right-handed coordinate system of the rotation (first 3x3 block in the camera-to-world transform) is as follows: from the point of view of the camera, the three axes are [down, right, backwards], which some people might consider to be [-y, x, z], where the camera is looking along -z. (The more conventional frame [x, y, z] is [right, up, backwards]. The COLMAP frame is [right, down, forwards], or [x, -y, -z].)
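
If you prefer the more conventional [right, up, backwards] frame, a common way to convert is the column swap used by the original LLFF/NeRF loaders; the sketch below assumes `poses` has the (3, 5, N) shape from the loading code above and is not something shipped with this dataset:

# Swap rotation columns: [down, right, back] -> [right, up, back].
# The translation and [h, w, f] columns (indices 3 and 4) are left unchanged.
poses = np.concatenate([poses[:, 1:2, :], -poses[:, 0:1, :], poses[:, 2:, :]], axis=1)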

We also provide an example dataloader in dataloader.py.
