---
license: mit
language:
- en
metrics:
- recall
base_model:
- Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
---

<h1 align="center">Vis-IR: Unifying Search With Visualized Information Retrieval</h1>

<p align="center">
    <a href="https://arxiv.org/abs/2502.11431">
        <img alt="Build" src="http://img.shields.io/badge/arXiv-2502.11431-B31B1B.svg">
    </a>
    <a href="https://github.com/VectorSpaceLab/Vis-IR">
        <img alt="Build" src="https://img.shields.io/badge/Github-Code-blue">
    </a>
    <a href="https://huggingface.co/datasets/marsh123/VIRA/">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Datasets-VIRA-yellow">
    </a>
    <a href="">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Datasets-MVRB-yellow">
    </a>
    <!-- <a href="">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-UniSE CLIP-yellow">
    </a> -->
    <a href="https://huggingface.co/marsh123/UniSE">
        <img alt="Build" src="https://img.shields.io/badge/🤗 Model-UniSE MLLM-yellow">
    </a>
</p>

## Overview

**VIRA** (Vis-IR Aggregation) is a large-scale dataset comprising a vast collection of screenshots from diverse sources, carefully curated into captioned and question-answer formats.

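To experiment with the data, the files can be pulled directly from the Hub. The following is a minimal sketch using `huggingface_hub`; it assumes the dataset remains hosted at `marsh123/VIRA` (the badge link above):

```python
from huggingface_hub import snapshot_download

# Fetch the dataset files (the per-domain .jsonl files plus any images
# uploaded so far) into a local directory.
local_path = snapshot_download(
    repo_id="marsh123/VIRA",
    repo_type="dataset",
    local_dir="VIRA",
)
print(local_path)
```
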
## Statistics

There are three types of data in VIRA: caption data, query-to-screenshot (q2s) data, and screenshot+query-to-screenshot (sq2s) data. The table below provides a detailed breakdown of the data counts for each domain and type.



## Organization Structure

The dataset is organized in the following structure:

```tree
Domain/
├── caption.jsonl: a screenshot image path and its corresponding caption
├── q2s.jsonl: a query, a positive screenshot and eight negative screenshots
├── sq2s.jsonl: a query, a query screenshot, a positive screenshot and eight negative screenshots
└── images/
    ├── image1.jpg
    ├── image2.jpg
    ...
```
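
Each `.jsonl` file holds one JSON object per line. As a minimal sketch of how one might iterate over a domain's `q2s.jsonl` — the field names `query`, `pos_image`, and `neg_images` below are illustrative assumptions, not a documented schema; inspect the files for the exact keys:

```python
import json
from pathlib import Path

domain_dir = Path("VIRA/Domain")  # hypothetical local path to one domain

with open(domain_dir / "q2s.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)                 # one JSON object per line
        query = record.get("query")               # text query (assumed key)
        positive = record.get("pos_image")        # positive screenshot path (assumed key)
        negatives = record.get("neg_images", [])  # eight negative screenshots (assumed key)
        print(query, positive, len(negatives))
```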

_Due to the large number of images, uploading all of them takes time. The upload is not yet complete, and we will continue the process._

## License

VIRA is licensed under the [MIT License](LICENSE).

## Citation

If you find this dataset useful, please cite:

```bibtex
@article{liu2025any,
  title={Any Information Is Just Worth One Single Screenshot: Unifying Search With Visualized Information Retrieval},
  author={Liu, Ze and Liang, Zhengyang and Zhou, Junjie and Liu, Zheng and Lian, Defu},
  journal={arXiv preprint arXiv:2502.11431},
  year={2025}
}
```