---
license: mit
language:
- en
tags:
- dataset
- document-processing
- multimodal
- vision-language
- information-retrieval
---

# Banque_Vision: A Multimodal Dataset for Document Understanding

## Overview

**Banque_Vision** is a **multimodal dataset** designed for **document-based question answering (QA) and information retrieval**. It pairs **textual data** with **visual document representations**, enabling research on how vision models and language models interact for document comprehension.

**Created by**: Matteo Khan

**Affiliation**: TW3Partners

**License**: MIT

[Connect with me on LinkedIn](https://www.linkedin.com/in/matteo-khan-a10309263/)

[Dataset on Hugging Face](https://huggingface.co/datasets/YourProfile/banque_vision)

## Dataset Structure

Each example contains four fields:

- **Document Text**: The full text of the document related to the query.
- **Query**: The question or request for information.
- **Document Page**: The specific page containing the answer.
- **Document Image**: The visual representation (scan or screenshot) of the document page.

This pairing lets models process and retrieve information across both textual and visual modalities, making the dataset directly relevant to **document AI research**.

## Intended Use

This dataset is designed for:

- **Document-based QA** (e.g., answering questions from scanned documents)
- **Information retrieval** from structured and unstructured sources
- **Multimodal learning** that combines text- and vision-based features
- **OCR-based research** and benchmarking
- **Fine-tuning vision-language models** such as Donut, LayoutLM, and BLIP (see the sketch below)
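
As a concrete starting point for the fine-tuning use case above, here is a minimal inference sketch using an off-the-shelf Donut DocVQA checkpoint from the Hugging Face Hub. The checkpoint name is a public model; the image path and question are placeholders standing in for a Banque_Vision record, so treat this as a sketch rather than the canonical pipeline for this dataset.

```python
# Minimal Donut DocVQA inference sketch. The checkpoint is public; the
# image path and question are placeholders, not files shipped with this dataset.
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")

image = Image.open("path/to/image.jpg").convert("RGB")
question = "What is the interest rate for savings accounts?"

# Donut encodes the task as an XML-like prompt fed to the decoder.
prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=512,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
)

# Strip special tokens and the task prompt, then parse the answer.
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "")
sequence = sequence.replace(processor.tokenizer.pad_token, "")
print(processor.token2json(re.sub(r"<.*?>", "", sequence, count=1).strip()))
```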

## Limitations & Considerations

While **Banque_Vision** is a useful resource, users should be aware of:

- **OCR errors**: Text extraction may be imperfect due to document quality.
- **Bias in document sources**: Some domains may be over- or under-represented.
- **Labeling noise**: Question-answer alignment may contain inaccuracies (a quick spot check is sketched below).
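
One way to gauge the labeling-noise point above is a spot check that flags records whose document text shares no words with their query. The heuristic is deliberately crude (stop words will mask many problems) and the field names are taken from the Dataset Format section below; it is a sketch, not a validation suite.

```python
# Rough labeling-noise spot check: flag records whose document_text shares
# no words with the query. Field names follow the Dataset Format section.
from datasets import load_dataset

dataset = load_dataset("YourProfile/banque_vision", split="train")

suspicious = 0
for record in dataset:
    text = record["document_text"].lower()
    words = record["query"].lower().split()
    if not any(word in text for word in words):
        suspicious += 1

print(f"{suspicious} of {len(dataset)} records share no query words with their text")
```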

## Dataset Format

The dataset is stored in **JSONL** format (one JSON object per line) with the following structure, shown pretty-printed here:

```json
{
  "document_text": "... The standard interest rate for savings accounts is 2.5% ...",
  "document_page": 5,
  "query": "What is the interest rate for savings accounts?",
  "document_image": "path/to/image.jpg"
}
```

## How to Use

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("YourProfile/banque_vision")

# Example: inspect the first training record
sample = dataset["train"][0]
print("Query:", sample["query"])
```

## Why It Matters

- **Bridges the gap** between text-based and vision-based document processing.
- **Supports real-world applications** such as legal document analysis, financial records processing, and automated document retrieval.
- **Encourages innovation** in hybrid models that combine **LLMs with vision transformers**.
## π Citation |
|
```bibtex |
|
@misc{banquevision2025, |
|
title={Banque_Vision: A Multimodal Dataset for Document Understanding}, |
|
author={Your Name}, |
|
year={2025}, |
|
eprint={arXiv:XXXX.XXXXX}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL} |
|
} |
|
``` |

**Feedback & Contributions**: Feel free to collaborate or provide feedback via [Hugging Face](https://huggingface.co/datasets/YourProfile/banque_vision).

**Happy Researching!**