---
dataset_info:
- config_name: corpus
  features:
  - name: corpus-id
    dtype: int64
  - name: image
    dtype: image
  - name: pdf_url
    dtype: string
  - name: company
    dtype: string
  - name: date
    dtype: string
  splits:
  - name: test
    num_bytes: 842829685.81
    num_examples: 1538
  download_size: 761076653
  dataset_size: 842829685.81
- config_name: qrels
  features:
  - name: query-id
    dtype: int64
  - name: corpus-id
    dtype: int64
  - name: score
    dtype: int64
  splits:
  - name: test
    num_bytes: 3072
    num_examples: 128
  download_size: 2521
  dataset_size: 3072
- config_name: queries
  features:
  - name: query-id
    dtype: int64
  - name: query
    dtype: string
  - name: source_type
    sequence: string
  - name: answer
    dtype: string
  splits:
  - name: test
    num_bytes: 35047
    num_examples: 52
  download_size: 23714
  dataset_size: 35047
configs:
- config_name: corpus
  data_files:
  - split: test
    path: corpus/test-*
- config_name: qrels
  data_files:
  - split: test
    path: qrels/test-*
- config_name: queries
  data_files:
  - split: test
    path: queries/test-*
---
# Vidore Benchmark 2 - ESG Human Labeled
This dataset is part of the "Vidore Benchmark 2" collection, designed for evaluating visual retrieval applications. It focuses on the theme of **ESG reports from the fast food industry**.
## Dataset Summary
This dataset provides a focused benchmark for visual retrieval tasks related to ESG reports from the fast food industry. It includes a curated set of documents, queries, relevance judgments (qrels), and page images. All queries are in English.
The dataset was fully labelled by hand and has no overlap of queries with its synthetic counterpart (available [here](https://huggingface.co/datasets/vidore/synthetic_rse_restaurant_filtered_v1.0)).
* **Number of Documents:** 27
* **Number of Queries:** 52
* **Number of Pages:** 1538
* **Number of Relevance Judgments (qrels):** 128
* **Average Number of Pages per Query:** 2.5
## Dataset Structure (Hugging Face Datasets)
The dataset is organized into three configurations, each with its own columns (a short loading sketch follows this list):
* **`corpus`**: Contains page-level information:
  * `"corpus-id"`: A unique identifier for this specific page within the corpus.
  * `"image"`: The image of the page (a PIL Image object).
  * `"pdf_url"`: The URL of the source PDF document.
  * `"company"`: The company that published the report.
  * `"date"`: The date associated with the report.
* **`queries`**: Contains query information:
  * `"query-id"`: A unique identifier for the query.
  * `"query"`: The text of the query.
  * `"source_type"`: The source type(s) associated with the query.
  * `"answer"`: Answer relevant to the query and the page.
* **`qrels`**: Contains relevance judgments:
  * `"query-id"`: The ID of the query.
  * `"corpus-id"`: The ID of the relevant page.
  * `"score"`: The relevance score.
## Usage
This dataset is designed for evaluating the performance of visual retrieval systems, particularly those focused on document image understanding.
**Example Evaluation with ColPali (CLI):**
Here's a code snippet demonstrating how to evaluate the ColPali model on this dataset using the `vidore-benchmark` command-line tool.
1. **Install the `vidore-benchmark` package:**
```bash
pip install vidore-benchmark datasets
```
2. **Run the evaluation:**
```bash
vidore-benchmark evaluate-retriever \
    --model-class colpali \
    --model-name vidore/colpali-v1.3 \
    --dataset-name vidore/restaurant_esg_reports_beir \
    --dataset-format beir \
    --split test
```
For more details on using `vidore-benchmark`, refer to the official documentation: [https://github.com/illuin-tech/vidore-benchmark](https://github.com/illuin-tech/vidore-benchmark)
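
If you prefer to score a retriever outside the CLI, one possible sketch (assuming you already have, for each `query-id`, a ranked list of `corpus-id`s produced by your own retriever) is to build the relevance sets from the `qrels` configuration and compute recall@k by hand:

```python
from collections import defaultdict
from datasets import load_dataset

def recall_at_k(rankings: dict[int, list[int]], k: int = 5) -> float:
    """Average recall@k over all judged queries.

    `rankings` is a hypothetical input: your retriever's ranked corpus-ids
    per query-id, e.g. {query_id: [corpus_id_1, corpus_id_2, ...], ...}.
    """
    qrels = load_dataset("vidore/restaurant_esg_reports_beir", "qrels", split="test")

    # Map each query to the set of pages judged relevant (score > 0).
    relevant = defaultdict(set)
    for row in qrels:
        if row["score"] > 0:
            relevant[row["query-id"]].add(row["corpus-id"])

    recalls = []
    for query_id, relevant_ids in relevant.items():
        retrieved = set(rankings.get(query_id, [])[:k])
        recalls.append(len(retrieved & relevant_ids) / len(relevant_ids))
    return sum(recalls) / len(recalls)
```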
## Citation
If you use this dataset in your research or work, please cite:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
  title={ColPali: Efficient Document Retrieval with Vision Language Models},
  author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2407.01449},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2407.01449},
}
```
## Acknowledgments
This work is partially supported by [ILLUIN Technology](https://www.illuin.tech/) and by a grant from ANRT France.