---
license: mit
language:
  - en
---

Vis-IR: Unifying Search With Visualized Information Retrieval

Overview

VIRA (Vis-IR Aggregation) is a large-scale dataset comprising a vast collection of screenshots from diverse sources, carefully curated into caption and question-answer formats.

Statistics

There are three types of data in VIRA: caption data, query-to-screenshot (q2s) data, and screenshot+query-to-screenshot (sq2s) data. The table below provides a detailed breakdown of the data counts for each domain and type.

(Statistics table image: per-domain counts of caption, q2s, and sq2s data.)
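
The per-domain counts can be reproduced locally by counting the lines in each JSONL file. Below is a minimal sketch, assuming the dataset has been downloaded into a directory laid out as described under Organization Structure; the `VIRA` root path is a placeholder:

```python
from pathlib import Path

# Placeholder path to a local copy of the dataset; adjust as needed.
DATASET_ROOT = Path("VIRA")
DATA_TYPES = ("caption", "q2s", "sq2s")

def count_records(root: Path) -> dict[str, dict[str, int]]:
    """Count JSONL records per domain directory and data type."""
    counts: dict[str, dict[str, int]] = {}
    for domain_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        per_type = {}
        for data_type in DATA_TYPES:
            jsonl_path = domain_dir / f"{data_type}.jsonl"
            if jsonl_path.exists():
                with jsonl_path.open(encoding="utf-8") as f:
                    per_type[data_type] = sum(1 for _ in f)
        counts[domain_dir.name] = per_type
    return counts

if __name__ == "__main__":
    for domain, per_type in count_records(DATASET_ROOT).items():
        print(domain, per_type)
```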

Organization Structure

The dataset is organized in the following structure:

```
Domain/
├── caption.jsonl: a screenshot image path and its corresponding caption
├── q2s.jsonl: a query, a positive screenshot and eight negative screenshots
├── sq2s.jsonl: a query, a query screenshot, a positive screenshot and eight negative screenshots
└── images/
    ├── image1.jpg
    ├── image2.jpg
    ...
```
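
Each `.jsonl` file holds one JSON object per line, so it can be read with standard tooling. A minimal sketch follows, assuming a local copy of one domain directory; the `"news"` domain name is a placeholder, and the exact JSON keys are not listed on this card, so the script simply prints whatever fields it finds:

```python
import json
from pathlib import Path

# Placeholder domain directory; replace "news" with any available domain.
domain_dir = Path("VIRA/news")

# caption.jsonl: each line pairs a screenshot image path with its caption.
with (domain_dir / "caption.jsonl").open(encoding="utf-8") as f:
    caption_records = [json.loads(line) for line in f]
print(f"{len(caption_records)} caption records, keys: {list(caption_records[0])}")

# q2s.jsonl: each line holds a query, a positive screenshot, and eight negatives.
with (domain_dir / "q2s.jsonl").open(encoding="utf-8") as f:
    q2s_example = json.loads(next(f))
print(f"q2s fields: {list(q2s_example)}")

# sq2s.jsonl: each line additionally includes a query screenshot.
with (domain_dir / "sq2s.jsonl").open(encoding="utf-8") as f:
    sq2s_example = json.loads(next(f))
print(f"sq2s fields: {list(sq2s_example)}")
```

The same files can also be loaded with the Hugging Face `datasets` library via `load_dataset("json", data_files=...)` if a streaming or batched interface is preferred.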

Due to the large number of images, the upload is still in progress; we will continue adding the remaining files until it is complete.

License

VIRA is licensed under the MIT License.

Citation

If you find this dataset useful, please cite:

@article{liu2025any,
  title={Any Information Is Just Worth One Single Screenshot: Unifying Search With Visualized Information Retrieval},
  author={Liu, Ze and Liang, Zhengyang and Zhou, Junjie and Liu, Zheng and Lian, Defu},
  journal={arXiv preprint arXiv:2502.11431},
  year={2025}
}