---
license: mit
language:
- en
---

# Vis-IR: Unifying Search With Visualized Information Retrieval


## Overview

**VIRA** (Vis-IR Aggregation) is a large-scale dataset comprising a vast collection of screenshots from diverse sources, carefully curated into caption and question-answer formats.

## Statistics

There are three types of data in VIRA: caption data, query-to-screenshot (q2s) data, and screenshot+query-to-screenshot (sq2s) data. The table below provides a detailed breakdown of the data counts for each domain and type.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66164f6245336ca774679611/EXXiP6zykuQunrx30hwBt.png)

## Organization Structure

The dataset is organized in the following structure:

```tree
Domain/
├── caption.jsonl: a screenshot image path and its corresponding caption
├── q2s.jsonl: a query, a positive screenshot, and eight negative screenshots
├── sq2s.jsonl: a query, a query screenshot, a positive screenshot, and eight negative screenshots
└── images/
    ├── image1.jpg
    ├── image2.jpg
    ...
```

_Due to the large number of images, uploading all of them takes time. The upload is not yet complete, and we will continue the process._

## License

VIRA is licensed under the [MIT License](LICENSE).

## Citation

If you find this dataset useful, please cite:

```
@article{liu2025any,
  title={Any Information Is Just Worth One Single Screenshot: Unifying Search With Visualized Information Retrieval},
  author={Liu, Ze and Liang, Zhengyang and Zhou, Junjie and Liu, Zheng and Lian, Defu},
  journal={arXiv preprint arXiv:2502.11431},
  year={2025}
}
```
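
## Loading the Data

Each `.jsonl` file contains one JSON record per line and can be read with standard Python tooling. The sketch below is a minimal example; the field names (`"image"`, `"caption"`, `"query"`, `"pos"`, `"negs"`) are assumptions for illustration and may differ from the actual schema, so inspect a sample record before relying on them.

```python
import json
from pathlib import Path

def load_jsonl(path):
    """Read a .jsonl file into a list of dicts, one record per line."""
    with open(path, "r", encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

domain_dir = Path("Domain")  # replace with an actual domain folder

# caption.jsonl: a screenshot image path and its caption
# (assumed keys: "image", "caption")
captions = load_jsonl(domain_dir / "caption.jsonl")

# q2s.jsonl: a query, one positive screenshot, and eight negative screenshots
# (assumed keys: "query", "pos", "negs")
q2s = load_jsonl(domain_dir / "q2s.jsonl")

print(f"{len(captions)} caption records, {len(q2s)} q2s records")
print(captions[0])  # inspect a sample record to confirm the actual key names
```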