Update README.md

README.md CHANGED
@@ -44,4 +44,53 @@ language:
- es
size_categories:
- 1K<n<10K
tags:
- Image
- Text
- Multilingual
---

<a href="FIXME" target="_blank">
  <img alt="arXiv" src="https://img.shields.io/badge/arXiv-traveling--across--languages-red?logo=arxiv" height="20" />
</a>
<a href="https://github.com/nlp-waseda/traveling-across-languages" target="_blank" style="display: inline-block; margin-right: 10px;">
  <img alt="GitHub Code" src="https://img.shields.io/badge/Code-traveling--across--languages-white?&logo=github&logoColor=white" />
</a>

# VisRecall

This repository contains the VisRecall benchmark, introduced in [Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs](FIXME).

## Dataset Description

Imagine a tourist who has finished their journey in Japan and returned to France, eager to share the places they visited with their friends.
When portraying these experiences, the visual information they convey is inherently independent of language, meaning that descriptions created in different languages should ideally be highly similar.
This concept extends to MLLMs as well.
While a model may demonstrate decent consistency in VQA tasks, any inconsistency in generation tasks would lead to a biased user experience (i.e., a knowing vs. saying distinction).
To assess the cross-lingual consistency of "visual memory" in MLLMs, we introduce VisRecall, a multilingual benchmark designed to evaluate visual description generation across 450 landmarks in 9 languages.
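
To make the consistency notion concrete, here is a generic sketch that embeds descriptions of the same landmark written in different languages and compares them. This is only an illustration of the idea, not the paper's metric or pipeline; the encoder choice and the sample captions are placeholders.

```python
# Generic sketch of the consistency idea, NOT the metric used in the paper:
# descriptions of the same landmark in different languages should embed closely.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical model outputs for one landmark under prompts in three languages.
descriptions = {
    "en": "A red five-storied pagoda overlooking Mount Fuji in spring.",
    "fr": "Une pagode rouge à cinq étages surplombant le mont Fuji au printemps.",
    "ja": "春の富士山を見下ろす赤い五重塔。",
}

embeddings = encoder.encode(list(descriptions.values()), convert_to_tensor=True)
for (i, lang_a), (j, lang_b) in combinations(enumerate(descriptions), 2):
    similarity = util.cos_sim(embeddings[i], embeddings[j]).item()
    print(f"{lang_a}-{lang_b}: {similarity:.3f}")
```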

The dataset contains the following fields:

| Field Name | Description |
| :----------------------- | :-------------------------------------------------------------------------- |
| `landmark_id` | Unique identifier for the landmark in the dataset. |
| `domestic_language_code` | ISO 639 language code of the official language spoken in the country where the landmark is located. |
| `language_code` | ISO 639 language code of the prompt. |
| `country_code` | ISO country code representing the location of the landmark. |
| `landmark_name` | Name of the landmark used for evaluation. |
| `prompt_idx` | Index of the prompt used. Each language includes two distinct prompts. |
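
For orientation, a minimal sketch of loading the data and reading these fields with the `datasets` library; the repo id `nlp-waseda/VisRecall` and the `test` split are assumptions, not confirmed by this card.

```python
# Minimal sketch: load the benchmark and print the fields described above.
# NOTE: the repo id "nlp-waseda/VisRecall" and the "test" split are assumptions.
from datasets import load_dataset

ds = load_dataset("nlp-waseda/VisRecall", split="test")

row = ds[0]  # one (landmark, language, prompt) combination
for field in ("landmark_id", "domestic_language_code", "language_code",
              "country_code", "landmark_name", "prompt_idx"):
    print(f"{field}: {row[field]}")
```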

Additionally, the following files are necessary for running evaluation:

| File Name | Description |
| :-------------------- | :---------------------------------------------------------------------- |
| `images.tar.gz` | Compressed archive containing images of landmarks, used for CLIPScore calculation. |
| `images_list.json` | List of image file paths included in the dataset. |
| `landmark_list.json` | Metadata for each landmark, including IDs, names, etc. |
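
As a sketch of how these files might be prepared locally (the file names come from the table above; the download directory and the JSON layouts are assumptions):

```python
# Sketch: unpack the image archive and load the two JSON files.
# The local directory is hypothetical; the JSON structure is assumed.
import json
import tarfile
from pathlib import Path

data_dir = Path("visrecall_data")  # hypothetical local download location

# Extract the landmark images used for CLIPScore calculation.
with tarfile.open(data_dir / "images.tar.gz", "r:gz") as tar:
    tar.extractall(path=data_dir)

images_list = json.loads((data_dir / "images_list.json").read_text(encoding="utf-8"))
landmark_list = json.loads((data_dir / "landmark_list.json").read_text(encoding="utf-8"))
print(f"{len(images_list)} image paths, {len(landmark_list)} landmark records")
```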

## Evaluation

Please refer to our [GitHub repository](https://github.com/nlp-waseda/traveling-across-languages) for detailed information on the evaluation setup.
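
Purely as an illustration of the CLIPScore component mentioned in the file table, here is a generic CLIPScore computation (following Hessel et al., 2021) with a placeholder checkpoint, image path, and caption; the repository's exact pipeline may differ.

```python
# Generic CLIPScore sketch (Hessel et al., 2021): 2.5 * max(0, cos(image, text)).
# Model checkpoint, image path, and caption are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("visrecall_data/images/example.jpg")  # hypothetical path
caption = "A red five-storied pagoda overlooking Mount Fuji in spring."

inputs = processor(text=[caption], images=image, return_tensors="pt", truncation=True)
with torch.no_grad():
    out = model(**inputs)

# Normalize the projected embeddings, then take the clipped cosine similarity.
img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
clipscore = 2.5 * torch.clamp((img * txt).sum(), min=0).item()
print(f"CLIPScore: {clipscore:.3f}")
```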

## Citation

```bibtex
FIXME
```