---
license: mit
language:
- en
tags:
- dataset
- document-processing
- multimodal
- vision-language
- information-retrieval
---
# Banque_Vision: A Multimodal Dataset for Document Understanding
## Overview
**Banque_Vision** is a **multimodal dataset** designed for **document-based question answering (QA) and information retrieval**. It pairs **textual data** with **visual document representations**, enabling research on how vision and language models interact for document comprehension.
- **Created by**: Matteo Khan
- **Affiliation**: TW3Partners
- **License**: MIT
- [Connect with me on LinkedIn](https://www.linkedin.com/in/matteo-khan-a10309263/)
- [Dataset on Hugging Face](https://huggingface.co/datasets/YourProfile/banque_vision)
## Dataset Structure
Each record contains:
- **Document Text**: The full extracted text of the document associated with the query.
- **Query**: The question or information request.
- **Document Page**: The page number on which the answer appears.
- **Document Image**: The scan or screenshot of that document page.
This dataset allows models to process and retrieve information across both textual and visual modalities, making it highly relevant for **document AI research**.
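For illustration, a single record can be modeled as the following Python structure. This is a hypothetical dataclass, not part of the dataset; the data itself ships as plain JSONL, shown under Dataset Format below.
```python
from dataclasses import dataclass

@dataclass
class BanqueVisionRecord:
    """Hypothetical view of one dataset record, mirroring the fields above."""
    document_text: str   # full extracted text of the document
    query: str           # the question or information request
    document_page: int   # page number on which the answer appears
    document_image: str  # path to the scan/screenshot of that page
```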
## Intended Use
This dataset is designed for:
- **Document-based QA** (e.g., answering questions based on scanned documents)
- **Information retrieval** from structured and unstructured sources
- **Multimodal learning** that combines text- and vision-based features
- **OCR-based research** and benchmarking
- **Fine-tuning vision-language models** such as Donut, LayoutLM, and BLIP (a minimal preprocessing sketch follows this list)
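As one hedged starting point for the fine-tuning use case, the sketch below turns a record into Donut-style model inputs. It assumes images are stored as local file paths and uses the public `naver-clova-ix/donut-base` checkpoint; any document-VQA processor could be substituted.
```python
from PIL import Image
from transformers import DonutProcessor

# Public Donut base checkpoint; swap in any document-VQA processor you prefer.
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")

def to_model_inputs(sample):
    """Convert one Banque_Vision record into (pixel tensor, question) inputs."""
    image = Image.open(sample["document_image"]).convert("RGB")
    pixel_values = processor(image, return_tensors="pt").pixel_values
    return pixel_values, sample["query"]
```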
## Limitations & Considerations
While **Banque_Vision** is a powerful resource, users should be aware of:
- **OCR errors**: Text extraction may be imperfect due to document quality (a crude quality filter is sketched after this list).
- **Bias in document sources**: Some domains may be over- or under-represented.
- **Data labeling noise**: Possible inaccuracies in question-answer alignment.
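Given the OCR caveat, a simple heuristic can flag badly extracted pages before training. The alphabetic-ratio threshold below is an arbitrary illustration, not a property of the dataset.
```python
def looks_clean(text: str, min_alpha_ratio: float = 0.5) -> bool:
    """Rough OCR-quality proxy: require a minimum share of alphabetic
    characters in the extracted text. The threshold is illustrative."""
    if not text:
        return False
    alpha = sum(ch.isalpha() for ch in text)
    return alpha / len(text) >= min_alpha_ratio
```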
## Dataset Format
The dataset is stored in **JSONL** format, with one JSON object per line:
```json
{
  "document_text": "... The standard interest rate for savings accounts is 2.5% ...",
  "document_page": 5,
  "query": "What is the interest rate for savings accounts?",
  "document_image": "path/to/image.jpg"
}
```
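If you work with the raw file directly, each line parses independently. The filename below is an assumption; adjust it to the actual file in the repository.
```python
import json

# Hypothetical local filename for illustration.
with open("banque_vision.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(records[0]["query"])
```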
## How to Use
```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("YourProfile/banque_vision")

# Inspect one training example
sample = dataset["train"][0]
print("Query:", sample["query"])
print("Answer page:", sample["document_page"])
```
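Building on the snippet above, here is a minimal text-only retrieval baseline for the information-retrieval use case. It assumes the third-party `rank_bm25` package (`pip install rank-bm25`) and naive whitespace tokenization, so treat it as a sketch rather than a reference implementation.
```python
from datasets import load_dataset
from rank_bm25 import BM25Okapi  # third-party BM25 implementation

dataset = load_dataset("YourProfile/banque_vision")

# Index every document's text (whitespace tokenization for brevity)
corpus = [sample["document_text"] for sample in dataset["train"]]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# Score all documents against one query and show the best match
query = dataset["train"][0]["query"]
scores = bm25.get_scores(query.lower().split())
best = max(range(len(scores)), key=scores.__getitem__)
print("Best-matching document:", corpus[best][:80], "...")
```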
## Why It Matters
- **Bridges the gap** between text and vision-based document processing.
- **Supports real-world applications** like legal document analysis, financial records processing, and automated document retrieval.
- **Encourages innovation** in hybrid models that combine **LLMs with vision transformers**.
## Citation
```bibtex
@misc{banquevision2025,
  title={Banque_Vision: A Multimodal Dataset for Document Understanding},
  author={Matteo Khan},
  year={2025},
  eprint={arXiv:XXXX.XXXXX},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
**Feedback & Contributions**: Feel free to collaborate or provide feedback via [Hugging Face](https://huggingface.co/datasets/YourProfile/banque_vision).
**Happy Researching!**