---
annotations_creators:
- human-annotated
language_creators:
- found
language: en
license: cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'MMS-VPR: Multimodal Street-Level Visual Place Recognition Dataset'
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- image-classification
- text-retrieval
task_ids:
- multi-class-image-classification
tags:
- Place Recognition
---

# MMS-VPR: Multimodal Street-Level Visual Place Recognition Dataset
The Multimodal Street-Level Visual Place Recognition Dataset (MMS-VPR) is a novel, open-access dataset designed to advance research in visual place recognition (VPR) and multimodal urban scene understanding. It focuses on complex, fine-grained, pedestrian-only urban environments, addressing a significant gap in existing VPR datasets, which often rely on vehicle-based imagery from road networks and overlook dense, walkable spaces, especially in non-Western urban contexts.
The dataset was collected within a ~70,800 m² open-air commercial district in Chengdu, China, and consists of:
- 747 smartphone-recorded videos (1Hz frame extraction),
- 1,417 manually captured images,
- 78,581 total images and frames annotated with 207 unique place classes (e.g., street segments, intersections, and a central square).
Each media file includes:
- Precise GPS metadata (latitude, longitude, altitude),
- Fine-grained timestamps,
- Human-verified annotations for class consistency.
The data was collected via a systematic and replicable protocol with multiple camera orientations (north, south, east, west) and spans a full daily cycle from 7:00 AM to 10:00 PM, ensuring diversity in lighting and temporal context (day and night).
The spatial layout of the dataset forms a natural graph structure with:
- 61 horizontal edges (street segments),
- 64 vertical edges,
- 81 nodes (intersections),
- 1 central square (subgraph).
This makes the dataset suitable for graph-based learning tasks such as GNN-based reasoning, and for multimodal, spatiotemporal, and structure-aware inference.
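As a rough illustration of how this layout can feed graph-based methods, the sketch below assembles a tiny graph in Python with networkx. The place codes and adjacency pairs shown are hypothetical placeholders, not part of the dataset; in practice they would be derived from the map images and `Annotations.xlsx` described under Dataset Structure below.

```python
# Illustrative sketch only: represent a few place classes as graph nodes and
# connect physically adjacent places. The codes and edges below are invented
# placeholders; real adjacency comes from the dataset's maps / Annotations.xlsx.
import networkx as nx

G = nx.Graph()

# Hypothetical place-class codes (intersections "N-...", street segments "Eh-...").
G.add_node("N-2-3", kind="node")       # an intersection
G.add_node("N-2-4", kind="node")       # a neighbouring intersection
G.add_node("Eh-1-1", kind="edge_h")    # a horizontal street segment
G.add_node("Square", kind="square")    # the central square

# Edges encode physical adjacency between places (placeholder pairs).
G.add_edges_from([("N-2-3", "Eh-1-1"), ("Eh-1-1", "N-2-4"), ("N-2-3", "Square")])

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```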
We also provide two targeted subsets:
- Sub-Dataset_Edges (125 classes): horizontal and vertical street segments.
- Sub-Dataset_Points (82 classes): intersections and the square.
This dataset demonstrates that high-quality Place Recognition datasets can be constructed using widely available smartphones when paired with a scientifically designed data collection framework—lowering the barrier to entry for future dataset development.
## Dataset Structure
The dataset is organized into three main folders:
### 01. Raw_Files (approximately 90 GB, 2,164 files)
This folder contains the original raw data collected in an urban district. It includes:
- `Photos/`: over 1,400 high-resolution photographs
- `Videos/`: over 700 videos recorded using handheld mobile cameras
These are unprocessed, original media files. Resolutions include:
- Image: 4032 × 3024
- Video: 1920 × 1080
### 02. Annotated_Original (approximately 38 GB, 162,186 files)
This folder contains the annotated version of the dataset. Videos have been sampled at 1 frame per second (1 Hz), and all images and video frames are manually labeled with place tags and descriptive metadata.
Subfolders:

- `Dataset_Full/`: the complete version of the dataset
- `Sub-Dataset_Edges/`: a subset containing only edge spaces (street segments)
- `Sub-Dataset_Points/`: a subset containing node spaces (intersections) and squares
Each dataset variant contains three modalities:

- `Images/`: high-resolution image files and video frames
- `Videos/`: high-resolution video clips
- `Texts/`: text files containing annotations and metadata
Subfolder structure in `Images/` and `Videos/` within `Dataset_Full/`:

- `Edge (horizontal)`
- `Edge (vertical)`
- `Node`
- `Square`
Subfolder structure in `Images/` and `Videos/` within `Sub-Dataset_Edges/`:

- `Edge (horizontal)`
- `Edge (vertical)`
Subfolder structure in `Images/` and `Videos/` within `Sub-Dataset_Points/`:

- `Node`
- `Square`
Each of these contains multiple subfolders named after spatial location codes (e.g., `Eh-1-1`, `N-2-3`), which correspond to the place labels used for classification. These labels can be mapped to numeric indices.
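For example, one simple (unofficial) way to derive such a mapping is to enumerate the place-code folders in sorted order; note that `Annotations.xlsx` (described below) provides the official label indices, which may differ from this enumeration. The root path below assumes the extracted `Annotated_Resized` layout.

```python
# Sketch: map place-code folder names (e.g. "Eh-1-1", "N-2-3") to numeric
# indices by walking the two-level layout <spatial type>/<place code> under
# Images/. The root path is an assumption; adjust to your extraction location.
from pathlib import Path

images_root = Path("Annotated_Resized/Dataset_Full/Images")

class_names = sorted(
    place.name
    for spatial_type in images_root.iterdir() if spatial_type.is_dir()
    for place in spatial_type.iterdir() if place.is_dir()
)
label_to_index = {name: i for i, name in enumerate(class_names)}

print(len(label_to_index), "classes")  # expected: 207 for Dataset_Full
```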
Text files in `Texts/`:

- `Annotations.xlsx`: place labels, spatial types, map locations, shop names, signage text, and label indices
- `Media_Metadata-Images.xlsx`: metadata for each image
- `Media_Metadata-Videos.xlsx`: metadata for each video
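These spreadsheets can be read with standard tooling; the snippet below is a minimal sketch using pandas (reading `.xlsx` files requires the `openpyxl` package). The exact column names are not reproduced here, so inspect `df.columns` after loading; the directory path is an assumption.

```python
# Sketch: load the annotation and metadata spreadsheets with pandas.
import pandas as pd

texts_dir = "Annotated_Resized/Dataset_Full/Texts"  # adjust to your extraction path

annotations = pd.read_excel(f"{texts_dir}/Annotations.xlsx")
image_meta = pd.read_excel(f"{texts_dir}/Media_Metadata-Images.xlsx")
video_meta = pd.read_excel(f"{texts_dir}/Media_Metadata-Videos.xlsx")

print(annotations.columns.tolist())   # e.g. place labels, spatial types, signage text, ...
print(len(image_meta), "image records,", len(video_meta), "video records")
```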
In addition, each dataset variant contains a visualization map (e.g., `Dataset_Full Map.png` in `Dataset_Full/`) showing the real-world geolocations of, and spatial relations between, all places in the target urban district; this map also reflects the dataset's inherent graph structure.
### 03. Annotated_Resized (approximately 4 GB, 162,186 files)
This is a downscaled version of the annotated dataset, identical in structure to Annotated_Original. All image and video frame resolutions are reduced:
- Original images (4032×3024) are resized to 256×192
- Video frames (1920×1080) are resized to 256×144
Aspect ratios are preserved. This version is recommended for faster training and experimentation.
## File Download and Reconstruction
Due to Hugging Face's file size limits, the dataset is split into multiple compressed files:

- `Raw_Files.part01.tar.gz`
- `Raw_Files.part02.tar.gz`
- `Raw_Files.part03.tar.gz`
- `Annotated_Original.tar.gz`
- `Annotated_Resized.tar.gz`
To reconstruct the raw files:

```bash
cat Raw_Files.part*.tar.gz > Raw_Files.tar.gz
tar -xzvf Raw_Files.tar.gz
```
## Usage

```bash
# Download the resized version (recommended)
wget https://huggingface.co/datasets/Yiwei-Ou/MMS-VPR/resolve/main/Annotated_Resized.tar.gz
tar -xzvf Annotated_Resized.tar.gz
```
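Alternatively, the same archive can be fetched programmatically with the `huggingface_hub` client; this is an optional sketch, not a required step.

```python
# Optional alternative to wget: download the archive via huggingface_hub.
from huggingface_hub import hf_hub_download

archive_path = hf_hub_download(
    repo_id="Yiwei-Ou/MMS-VPR",
    repo_type="dataset",
    filename="Annotated_Resized.tar.gz",
)
print("Downloaded to:", archive_path)  # then extract with: tar -xzvf <archive_path>
```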
Researchers can train models directly on the contents of `Dataset_Full/`, which includes aligned image, text, and video modalities. For efficient training, the resized version is usually sufficient; for high-fidelity testing or custom processing, use the full-resolution version or the raw files.
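As a starting point for image-classification experiments, the sketch below builds a small PyTorch `Dataset` over the two-level folder layout (spatial type / place code) in `Images/`; torchvision's `ImageFolder` would treat the four spatial-type folders as the classes, so a custom loader is used instead. The root path, file extensions, and transform are assumptions, not official tooling.

```python
# Sketch: a minimal PyTorch Dataset over Annotated_Resized/Dataset_Full/Images,
# labelling each image by its place-code folder (e.g. "Eh-1-1") nested under a
# spatial-type folder ("Edge (horizontal)", "Node", ...). Illustrative only.
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

class MMSVPRImages(Dataset):
    def __init__(self, root, transform=None):
        root = Path(root)
        # Collect place-code directories one level below the spatial-type folders.
        class_dirs = sorted(
            place for spatial_type in root.iterdir() if spatial_type.is_dir()
            for place in spatial_type.iterdir() if place.is_dir()
        )
        self.class_to_idx = {d.name: i for i, d in enumerate(class_dirs)}
        self.samples = [
            (f, self.class_to_idx[d.name])
            for d in class_dirs
            for f in sorted(d.glob("*"))
            if f.suffix.lower() in {".jpg", ".jpeg", ".png"}
        ]
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert("RGB")
        if self.transform:
            img = self.transform(img)
        return img, label

transform = transforms.Compose([transforms.Resize((192, 256)), transforms.ToTensor()])
dataset = MMSVPRImages("Annotated_Resized/Dataset_Full/Images", transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)
print(len(dataset), "images,", len(dataset.class_to_idx), "classes")
```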
## Dataset Summary

| Dataset Version | Size | File Count |
|---|---|---|
| Raw Files | ~90 GB | 2,164 |
| Annotated_Original | ~38 GB | 162,186 |
| Annotated_Resized | ~4 GB | 162,186 |
## License
This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
## Citation

If you use this dataset in your research, please cite:

```bibtex
@article{ou2025mmsvpr,
  title   = {MMS-VPR: Multimodal Street-Level Visual Place Recognition Dataset and Benchmark},
  author  = {Ou, Yiwei and Ren, Xiaobin and Sun, Ronggui and Gao, Guansong and Jiang, Ziyi and Zhao, Kaiqi and Manfredini, Manfredo},
  journal = {arXiv preprint arXiv:2505.12254},
  year    = {2025},
  url     = {https://arxiv.org/abs/2505.12254}
}
```