Multimodal Street-level Place Recognition Dataset

The Multimodal Street-level Place Recognition Dataset is a novel, open-access dataset designed to advance research in visual place recognition (VPR) and multimodal urban scene understanding. It focuses on complex, fine-grained, pedestrian-only urban environments, addressing a significant gap in existing VPR datasets, which often rely on vehicle-based imagery from road networks and overlook dense, walkable spaces, especially in non-Western urban contexts.

The dataset was collected within a ~70,800 m² open-air commercial district in Chengdu, China, and consists of:

  • 747 smartphone-recorded videos (1Hz frame extraction),
  • 1,417 manually captured images,
  • 78,581 total images and frames annotated with 207 unique place classes (e.g., street segments, intersections, the central square).

Each media file includes:

  • Precise GPS metadata (latitude, longitude, altitude),
  • Fine-grained timestamps,
  • Human-verified annotations for class consistency.

The data was collected via a systematic and replicable protocol with multiple camera orientations (north, south, east, west) and spans a full daily cycle from 7:00 AM to 10:00 PM, ensuring diversity in lighting and temporal context (day and night).

The spatial layout of the dataset forms a natural graph structure with:

  • 61 horizontal edges (street segments),
  • 64 vertical edges,
  • 81 nodes (intersections),
  • 1 central square (subgraph).

This makes the dataset suitable for graph-based learning tasks such as GNN-based reasoning, and for multimodal, spatiotemporal, and structure-aware inference.
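The graph structure above can be exploited directly in code. The sketch below builds a small adjacency structure and infers the spatial type of a place from its location-code prefix; the prefixes `Eh` and `N` follow the folder naming shown in this card (e.g., Eh-1-1, N-2-3), while `Ev`, `S`, and the example adjacency are assumptions made for illustration only.

```python
# Sketch: representing the district layout as a graph for structure-aware
# (e.g., GNN-based) models. The codes Eh-1-1 and N-2-3 follow the card's
# naming scheme; the Ev/S prefixes and the adjacency below are hypothetical.

from collections import defaultdict

# Hypothetical fragment of the layout: two intersections joined by one
# horizontal street segment, one of them adjacent to the square.
edges = [
    ("N-1-1", "Eh-1-1"),  # intersection -> street segment
    ("Eh-1-1", "N-1-2"),  # street segment -> next intersection
    ("N-1-2", "S-1"),     # intersection -> central square (code assumed)
]

adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

def place_type(code: str) -> str:
    """Infer the spatial type from a location-code prefix."""
    prefix = code.split("-")[0]
    return {"Eh": "edge (horizontal)", "Ev": "edge (vertical)",
            "N": "node (intersection)", "S": "square"}[prefix]

print(place_type("Eh-1-1"))        # edge (horizontal)
print(sorted(adjacency["N-1-2"]))  # ['Eh-1-1', 'S-1']
```

A real pipeline would derive the full 206-edge/82-node graph from the visualization map or the annotation spreadsheets rather than hard-coding it.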

We also provide two targeted subsets:

  • Sub-Dataset_Edges (125 classes): horizontal and vertical street segments.
  • Sub-Dataset_Points (82 classes): intersections and the square.

This dataset demonstrates that high-quality Place Recognition datasets can be constructed using widely available smartphones when paired with a scientifically designed data collection framework—lowering the barrier to entry for future dataset development.

Dataset Structure

The dataset is organized into three main folders:

01. Raw_Files (approximately 90 GB, 2,164 files)

This folder contains the original raw data collected in an urban district. It includes:

  • Photos/: 1,417 high-resolution photographs
  • Videos/: 747 videos recorded using handheld mobile cameras

These are unprocessed, original media files. Resolutions include:

  • Image: 4032 × 3024
  • Video: 1920 × 1080

02. Annotated_Original (approximately 38 GB, 162,186 files)

This folder contains the annotated version of the dataset. Videos have been sampled at 1 frame per second (1 Hz), and all images and video frames are manually labeled with place tags and descriptive metadata.
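The 1 Hz sampling can be reproduced for new footage with simple index arithmetic: keep the frame closest to each whole second. The tool the authors actually used is not specified in this card, so the function below is only a sketch of that arithmetic.

```python
# Sketch: which frame indices to keep when sampling a video at 1 Hz.
# The dataset's own extraction tool is unspecified; this only illustrates
# the "1 frame per second" selection rule.

def one_hz_indices(total_frames: int, fps: float) -> list[int]:
    """Return the frame index closest to each whole second of the clip."""
    n_seconds = int(total_frames / fps)
    return [round(t * fps) for t in range(n_seconds + 1)
            if round(t * fps) < total_frames]

# A 5-second clip at 30 fps keeps frames 0, 30, 60, 90, 120.
print(one_hz_indices(150, 30.0))  # [0, 30, 60, 90, 120]
```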

Subfolders:

  • Dataset_Full/: The complete version of the dataset
  • Sub-Dataset_Edges/: A subset containing only edge spaces (street segments)
  • Sub-Dataset_Points/: A subset containing node spaces (intersections) and squares

Each dataset variant contains three modalities:

  • Images/: High-resolution image files and video frames
  • Videos/: High-resolution video clips
  • Texts/: Text files containing annotations and metadata

Subfolder structure in Images/ and Videos/ in Dataset_Full/:

  • Edge (horizontal)
  • Edge (vertical)
  • Node
  • Square

Subfolder structure in Images/ and Videos/ in Sub-Dataset_Edges/:

  • Edge (horizontal)
  • Edge (vertical)

Subfolder structure in Images/ and Videos/ in Sub-Dataset_Points/:

  • Node
  • Square

Each of these contains multiple subfolders named after spatial location codes (e.g., Eh-1-1, N-2-3), which correspond to place labels used for classification. These labels can be mapped to numeric indices.
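A minimal sketch of that label-to-index mapping, assuming the class names are the location-code folder names; the three codes listed here are an illustrative subset, and a real run would enumerate the subfolders under Images/ instead.

```python
# Sketch: mapping location-code folder names (e.g., Eh-1-1, N-2-3) to
# numeric class indices for classification. The label list is illustrative;
# in practice it would be collected from the Images/ directory tree.

labels = ["Eh-1-1", "Eh-1-2", "N-2-3"]  # illustrative subset
label_to_index = {name: i for i, name in enumerate(sorted(labels))}
index_to_label = {i: name for name, i in label_to_index.items()}

print(label_to_index["N-2-3"])  # 2
print(index_to_label[0])        # Eh-1-1
```

Sorting before enumerating keeps the mapping deterministic across runs and machines.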

Text files in Texts/:

  • Annotations.xlsx: Place labels, spatial types, map locations, shop names, signage text, and label indices
  • Media_Metadata-Images.xlsx: Metadata for each image
  • Media_Metadata-Videos.xlsx: Metadata for each video

In addition, each dataset variant contains a visualization map (e.g., Dataset_Full Map.png in Dataset_Full/) showing the real-world geolocations of all places in the target urban district and the relations between them, which also reflects the inherent graph structure of the dataset.

03. Annotated_Resized (approximately 4 GB, 162,186 files)

This is a downscaled version of the annotated dataset, identical in structure to Annotated_Original. All image and video frame resolutions are reduced:

  • Original images (4032×3024) are resized to 256×192
  • Video frames (1920×1080) are resized to 256×144

Aspect ratios are preserved. This version is recommended for faster training and experimentation.

File Download and Reconstruction

Due to Hugging Face's file size limits, the dataset is split into multiple compressed files:

  • Raw_Files.part01.tar.gz
  • Raw_Files.part02.tar.gz
  • Raw_Files.part03.tar.gz
  • Annotated_Original.tar.gz
  • Annotated_Resized.tar.gz

To reconstruct the raw files:

cat Raw_Files.part*.tar.gz > Raw_Files.tar.gz
tar -xzvf Raw_Files.tar.gz

Usage

# Download the resized version (recommended)
wget https://huggingface.co/datasets/Yiwei-Ou/Multimodal_Street-level_Place_Recognition_Dataset/resolve/main/Annotated_Resized.tar.gz
tar -xzvf Annotated_Resized.tar.gz

Researchers can directly train models using the contents of Dataset_Full, which includes aligned image, text, and video modalities. For efficient training, the resized version is usually sufficient. For high-fidelity testing or custom processing, use the full-resolution version or raw files.
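To feed the images into a classifier, one can pair each file with the location-code folder it sits in. The sketch below demonstrates this on a tiny synthetic directory tree (the folder and file names are stand-ins created in a temp directory); with the real dataset, `root` would point at Dataset_Full/Images instead.

```python
# Sketch: building (relative_path, label) pairs from the documented layout
# Images/<spatial type>/<location code>/<file>, where the label is the
# immediate parent folder (the location code). The tree built here is
# synthetic; real use points `root` at Dataset_Full/Images.

import os
import tempfile
from pathlib import Path

def index_images(root: Path) -> list[tuple[str, str]]:
    """Return sorted (relative_path, label) pairs for every file under root."""
    pairs = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            rel = Path(dirpath, name).relative_to(root)
            pairs.append((str(rel), Path(dirpath).name))
    return sorted(pairs)

# Tiny synthetic tree mimicking the documented structure.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for branch in [("Edge (horizontal)", "Eh-1-1"), ("Node", "N-2-3")]:
        d = root.joinpath(*branch)
        d.mkdir(parents=True)
        (d / "frame_0001.jpg").touch()
    print(index_images(root))
```

The resulting pairs plug straight into any image-classification loader once labels are mapped to indices.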

Dataset Summary

Dataset Version       Size     File Count
Raw_Files             ~90 GB   2,164
Annotated_Original    ~38 GB   162,186
Annotated_Resized     ~4 GB    162,186

License

This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

Citation

If you use this dataset in your research, please cite our NeurIPS 2025 Datasets and Benchmarks submission (citation to be added upon acceptance).

Contact

Author: Yiwei Ou
Email: [email protected]
Hugging Face: https://huggingface.co/Yiwei-Ou

Downloads last month
47