- split: test
  path: queries/test-*
---

# Vidore Benchmark 2 - ESG Human Labeled
This dataset is part of the "Vidore Benchmark 2" collection, designed for evaluating visual retrieval applications. It focuses on the theme of **ESG reports from the fast food industry**.

## Dataset Summary

This dataset provides a focused benchmark for visual retrieval tasks related to ESG reports for the fast food industry. It includes a curated set of documents, queries, relevance judgments (qrels), and page images. All queries are in English.

This dataset was fully labelled by hand and has no overlap of queries with its synthetic counterpart (available [here](https://huggingface.co/datasets/vidore/synthetic_rse_restaurant_filtered_v1.0)).
* **Number of Documents:** 27
* **Number of Queries:** 52
* **Number of Pages:** 1538
* **Number of Relevance Judgments (qrels):** 128
* **Average Number of Pages per Query:** 2.5
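The average above follows directly from the counts, assuming each qrel marks one relevant page for one query:

```python
num_qrels = 128    # relevance judgments, one per (query, page) pair
num_queries = 52

avg_pages_per_query = num_qrels / num_queries
print(round(avg_pages_per_query, 1))  # 2.5
```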

## Dataset Structure (Hugging Face Datasets)

The dataset is structured into the following columns:

* **`corpus`**: Contains page-level information:
  * `"image"`: The image of the page (a PIL Image object).
  * `"corpus-id"`: A unique identifier for this specific page within the corpus.
* **`queries`**: Contains query information:
  * `"query-id"`: A unique identifier for the query.
  * `"query"`: The text of the query.
* **`qrels`**: Contains relevance judgments:
  * `"corpus-id"`: The ID of the relevant page.
  * `"query-id"`: The ID of the query.
  * `"answer"`: The answer, relevant to both the query and the page.
  * `"score"`: The relevance score.
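
The three tables link together through the ID columns. A minimal sketch with made-up toy rows (the IDs and query text below are illustrative, not taken from the dataset) shows how qrels map each query to its relevant pages:

```python
# Toy rows mirroring the column schema above (values are illustrative only).
queries = [{"query-id": 0, "query": "What are the CO2 reduction targets?"}]
qrels = [
    {"query-id": 0, "corpus-id": 12, "answer": "...", "score": 1},
    {"query-id": 0, "corpus-id": 13, "answer": "...", "score": 1},
]

# A retrieval evaluator typically consumes qrels as query-id -> {relevant corpus-ids}.
relevant = {}
for judgment in qrels:
    relevant.setdefault(judgment["query-id"], set()).add(judgment["corpus-id"])

print(relevant)  # {0: {12, 13}}
```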

## Usage

This dataset is designed for evaluating the performance of visual retrieval systems, particularly those focused on document image understanding.

**Example Evaluation with ColPali (CLI):**

Here is an example showing how to evaluate the ColPali model on this dataset using the `vidore-benchmark` command-line tool.

1. **Install the `vidore-benchmark` package:**

   ```bash
   pip install vidore-benchmark datasets
   ```

2. **Run the evaluation:**

   ```bash
   vidore-benchmark evaluate-retriever \
       --model-class colpali \
       --model-name vidore/colpali-v1.3 \
       --dataset-name vidore/restaurant_esg_reports_beir \
       --dataset-format beir \
       --split test
   ```
For more details on using `vidore-benchmark`, refer to the official documentation: [https://github.com/illuin-tech/vidore-benchmark](https://github.com/illuin-tech/vidore-benchmark)

## Citation

If you use this dataset in your research or work, please cite:
#INSERT CITATION

## License

#INSERT LICENSE

## Acknowledgments

This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/), and by a grant from ANRT France.