---
dataset_info:
  features:
  - name: landmark_id
    dtype: int64
  - name: country_code
    dtype: string
  - name: domestic_language_code
    dtype: string
  - name: language_code
    dtype: string
  - name: landmark_name
    dtype: string
  - name: prompt_idx
    dtype: int64
  splits:
  - name: test
    num_bytes: 470104
    num_examples: 8100
  - name: debug
    num_bytes: 548
    num_examples: 10
  download_size: 80893
  dataset_size: 470652
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: debug
    path: data/debug-*
license: cc
task_categories:
- text-generation
language:
- ar
- zh
- en
- fr
- de
- it
- ja
- pt
- es
size_categories:
- 1K<n<10K
tags:
- Image
- Text
- Multilingual
---


<a href="https://arxiv.org/abs/2505.15075" target="_blank">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-traveling--across--languages-red?logo=arxiv" height="20" />
</a>
<a href="https://github.com/nlp-waseda/traveling-across-languages" target="_blank" style="display: inline-block; margin-right: 10px;">
    <img alt="GitHub Code" src="https://img.shields.io/badge/Code-traveling--across--languages-white?&logo=github&logoColor=white" />
</a>

# VisRecall
This repository contains the VisRecall benchmark, introduced in [Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs](https://arxiv.org/abs/2505.15075). 

## Dataset Description
Imagine a tourist who has finished their journey in Japan and returned to France, eager to share the places they visited with their friends.
When recounting these experiences, the visual information they convey is inherently independent of language, so descriptions produced in different languages should ideally be highly similar.
This expectation extends to MLLMs as well.
While a model may demonstrate decent consistency on VQA tasks, any inconsistency in generation tasks leads to a biased user experience (i.e., a knowing vs. saying distinction).
To assess the cross-lingual consistency of "visual memory" in MLLMs, we introduce VisRecall, a multilingual benchmark designed to evaluate visual description generation across 450 landmarks in 9 languages.

The dataset contains the following fields:

| Field Name               | Description                                                                 |
| :----------------------- | :-------------------------------------------------------------------------- |
| `landmark_id`            | Unique identifier for the landmark in the dataset.                         |
| `domestic_language_code` | ISO 639 language code of the official language spoken in the country where the landmark is located. |
| `language_code`          | ISO 639 language code of the prompt.                                       |
| `country_code`           | ISO country code representing the location of the landmark.                |
| `landmark_name`          | Name of the landmark used for evaluation.                                  |
| `prompt_idx`             | Index of the prompt used. Each language includes two distinct prompts.     |
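The schema above implies that each landmark appears once per (language, prompt) pair, which is consistent with the 8,100 test examples (450 landmarks × 9 languages × 2 prompts). A minimal sketch of working with records of this shape, using synthetic rows rather than actual dataset contents (the real data can be loaded with the `datasets` library instead):

```python
import pandas as pd

# Hypothetical rows mimicking the VisRecall schema; these are NOT real dataset records.
rows = [
    {"landmark_id": 0, "country_code": "JP", "domestic_language_code": "ja",
     "language_code": "en", "landmark_name": "Tokyo Tower", "prompt_idx": 0},
    {"landmark_id": 0, "country_code": "JP", "domestic_language_code": "ja",
     "language_code": "en", "landmark_name": "Tokyo Tower", "prompt_idx": 1},
    {"landmark_id": 0, "country_code": "JP", "domestic_language_code": "ja",
     "language_code": "fr", "landmark_name": "Tokyo Tower", "prompt_idx": 0},
]
df = pd.DataFrame(rows)

# Select all prompts for one evaluation language.
en_prompts = df[df["language_code"] == "en"]
print(len(en_prompts))  # 2
```

The same filter applied per `language_code` is how one would slice the benchmark into its nine per-language evaluation sets.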

Additionally, the following files are required to run the evaluation:
| File Name             | Description                                                             |
| :-------------------- | :---------------------------------------------------------------------- |
| `images.tar.gz`       | Compressed archive containing images of landmarks, used for CLIPScore calculation. |
| `images_list.json`    | List of image file paths included in the dataset.                       |
| `landmark_list.json`  | Metadata for each landmark, including IDs, names, etc. |
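A sketch of how the two JSON manifests might be consumed, e.g. to group extracted image paths by landmark before computing CLIPScore. The file contents below are invented for illustration; the real JSON layout may differ, so treat the field names and path structure as assumptions:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical manifest contents; the actual files in this repository may use a different layout.
images_list = ["images/0/a.jpg", "images/0/b.jpg"]
landmark_list = [{"landmark_id": 0, "landmark_name": "Tokyo Tower"}]

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "images_list.json").write_text(json.dumps(images_list))
    (root / "landmark_list.json").write_text(json.dumps(landmark_list))

    # Load both manifests and bucket image paths by the landmark directory
    # they sit under (here, the second path component).
    paths = json.loads((root / "images_list.json").read_text())
    landmarks = json.loads((root / "landmark_list.json").read_text())
    by_landmark = {}
    for p in paths:
        by_landmark.setdefault(Path(p).parts[1], []).append(p)
```

In practice `images.tar.gz` would first be unpacked (e.g. with the `tarfile` module) so that the listed paths resolve to files on disk.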

## Evaluation
Please refer to our [GitHub repository](https://github.com/nlp-waseda/traveling-across-languages) for detailed information on the evaluation setup.

## Citation

```bibtex
@misc{wang2025travelinglanguagesbenchmarkingcrosslingual,
      title={Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs}, 
      author={Hao Wang and Pinzhi Huang and Jihan Yang and Saining Xie and Daisuke Kawahara},
      year={2025},
      eprint={2505.15075},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.15075}, 
}
```