---
dataset_info:
  features:
    - name: landmark_id
      dtype: int64
    - name: country_code
      dtype: string
    - name: domestic_language_code
      dtype: string
    - name: language_code
      dtype: string
    - name: landmark_name
      dtype: string
    - name: prompt_idx
      dtype: int64
  splits:
    - name: test
      num_bytes: 470104
      num_examples: 8100
    - name: debug
      num_bytes: 548
      num_examples: 10
  download_size: 80893
  dataset_size: 470652
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
      - split: debug
        path: data/debug-*
license: cc
task_categories:
  - text-generation
language:
  - ar
  - zh
  - en
  - fr
  - de
  - it
  - ja
  - pt
  - es
size_categories:
  - 1K<n<10K
tags:
  - Image
  - Text
  - Multilingual
---

# VisRecall

This repository contains the VisRecall benchmark, introduced in [Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs](https://arxiv.org/abs/2505.15075).

## Dataset Description

Imagine a tourist who has just finished a journey through Japan and returned home to France, eager to share the places they visited with their friends. The visual information they convey is inherently independent of language, so descriptions produced in different languages should ideally be highly similar. The same expectation extends to MLLMs: even if a model demonstrates decent consistency on VQA tasks, any inconsistency in generation tasks leads to a biased user experience across languages (i.e., a "knowing vs. saying" distinction). To assess the cross-lingual consistency of "visual memory" in MLLMs, we introduce VisRecall, a multilingual benchmark designed to evaluate visual description generation across 450 landmarks in 9 languages.

The dataset contains the following fields:

| Field Name | Description |
|---|---|
| `landmark_id` | Unique identifier for the landmark in the dataset. |
| `domestic_language_code` | ISO 639 language code of the official language spoken in the country where the landmark is located. |
| `language_code` | ISO 639 language code of the prompt. |
| `country_code` | ISO country code representing the location of the landmark. |
| `landmark_name` | Name of the landmark used for evaluation. |
| `prompt_idx` | Index of the prompt used; each language includes two distinct prompts. |
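
The fields can be inspected directly with the `datasets` library. A minimal loading sketch, assuming the repository id is `conan1024hao/VisRecall`:

```python
from datasets import load_dataset

# Load the 8,100-example evaluation split; a 10-example "debug" split
# is also available via split="debug".
ds = load_dataset("conan1024hao/VisRecall", split="test")

print(ds.features)  # field names and dtypes, as listed above
print(ds[0])        # one landmark/prompt pair
```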

Additionally, the following files are needed to run the evaluation:

| File Name | Description |
|---|---|
| `images.tar.gz` | Compressed archive containing images of landmarks, used for CLIPScore calculation. |
| `images_list.json` | List of image file paths included in the dataset. |
| `landmark_list.json` | Metadata for each landmark, including IDs, names, etc. |
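
A sketch for fetching and unpacking these files with `huggingface_hub`, again assuming the repository id `conan1024hao/VisRecall` and that `images_list.json` holds a flat list of paths:

```python
import json
import tarfile

from huggingface_hub import hf_hub_download

REPO_ID = "conan1024hao/VisRecall"  # assumed repository id

# Download the auxiliary files from the dataset repository.
archive = hf_hub_download(REPO_ID, "images.tar.gz", repo_type="dataset")
listing = hf_hub_download(REPO_ID, "images_list.json", repo_type="dataset")

# Unpack the landmark images used for CLIPScore calculation.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall("images")

with open(listing) as f:
    image_paths = json.load(f)  # assumed: a list of relative file paths
```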

## Evaluation

Please refer to our GitHub repository for detailed information on the evaluation setup.
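
As a rough illustration of the CLIPScore component mentioned above (this is not the official evaluation script, and the scoring model `openai/clip-vit-base-patch32` is an assumption):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image_path: str, description: str) -> float:
    """CLIPScore (Hessel et al., 2021): 2.5 * max(cos(image, text), 0).

    Note: CLIP's text encoder truncates input at 77 tokens, so long
    descriptions are only partially scored by this sketch.
    """
    inputs = processor(
        text=[description],
        images=Image.open(image_path),
        return_tensors="pt",
        padding=True,
        truncation=True,
    )
    with torch.no_grad():
        out = model(**inputs)
    # Normalize the projected embeddings and take their cosine similarity.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return max(0.0, 2.5 * float((img * txt).sum()))
```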

## Citation

```bibtex
@misc{wang2025travelinglanguagesbenchmarkingcrosslingual,
      title={Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs},
      author={Hao Wang and Pinzhi Huang and Jihan Yang and Saining Xie and Daisuke Kawahara},
      year={2025},
      eprint={2505.15075},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.15075},
}
```