|
--- |
|
task_categories: |
|
- image-to-text |
|
- visual-question-answering |
|
tags: |
|
- ocr |
|
- document-analysis |
|
- multilingual |
|
- vqa |
|
- webdataset |
|
size_categories: |
|
- 100K<n<1M |
|
configs: |
|
- config_name: ar |
|
data_files: |
|
- split: train |
|
path: ar/*.tar |
|
- config_name: de |
|
data_files: |
|
- split: train |
|
path: de/*.tar |
|
- config_name: es |
|
data_files: |
|
- split: train |
|
path: es/*.tar |
|
- config_name: fr |
|
data_files: |
|
- split: train |
|
path: fr/*.tar |
|
- config_name: it |
|
data_files: |
|
- split: train |
|
path: it/*.tar |
|
- config_name: ja |
|
data_files: |
|
- split: train |
|
path: ja/*.tar |
|
- config_name: ko |
|
data_files: |
|
- split: train |
|
path: ko/*.tar |
|
- config_name: ru |
|
data_files: |
|
- split: train |
|
path: ru/*.tar |
|
- config_name: sa |
|
data_files: |
|
- split: train |
|
path: sa/*.tar |
|
- config_name: th |
|
data_files: |
|
- split: train |
|
path: th/*.tar |
|
- config_name: zh |
|
data_files: |
|
- split: train |
|
path: zh/*.tar |
|
--- |
|
|
|
# Nayana-DocOCR Global Annotated Dataset |
|
|
|
## Dataset Description |
|
|
|
This is a large-scale multilingual document OCR dataset containing approximately 400GB of document images with OCR region, layout, and visual question answering (VQA) annotations across 11 global languages, with the original English text preserved alongside each translated region. The dataset is stored in **WebDataset** format using TAR archives for efficient streaming and processing.
|
|
|
### Available Language Subsets |
|
- **Arabic** (ar): Available |
|
- **German** (de): Available |
|
- **Spanish** (es): Available |
|
- **French** (fr): Available |
|
- **Italian** (it): Available |
|
- **Japanese** (ja): Available |
|
- **Korean** (ko): Available

- **Russian** (ru): Available

- **Sanskrit** (sa): Available
|
- **Thai** (th): Available |
|
- **Chinese** (zh): Available |
|
|
|
### Dataset Statistics |
|
- **Available Languages**: 11
|
- **Total Size**: ~400GB |
|
- **Format**: WebDataset (TAR archives) |
|
- **Chunk Size**: 5GB per TAR file |
|
|
|
### Dataset Structure |
|
|
|
This dataset uses the WebDataset format, where each sample is stored as a set of separate files within TAR archives (a short inspection sketch follows the list below):
|
|
|
- **Images**: Document images in JPG format (PNG converted to JPG for optimization) |
|
- **Metadata Files**: Separate text/JSON files for each field: |
|
- `XXXXXXXX.jpg`: The document image |
|
- `XXXXXXXX.image_id.txt`: Unique identifier for the image |
|
- `XXXXXXXX.font_used.txt`: Font information used in the document |
|
- `XXXXXXXX.regions.json`: Text regions with bounding boxes and OCR results |
|
- `XXXXXXXX.vqa.json`: Visual Question Answering annotations |
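
As a quick sanity check, the contents of a shard can be listed with the standard `tarfile` module to confirm this per-sample naming. A minimal sketch, assuming one shard has already been downloaded locally (the path below is illustrative):

```python
import tarfile

# Illustrative local path to a single downloaded shard
shard_path = "ar/ar_00000.tar"

with tarfile.open(shard_path) as tar:
    # Each sample contributes one .jpg plus its .image_id.txt, .font_used.txt,
    # .regions.json and .vqa.json companions, all sharing the same key prefix.
    for member in tar.getmembers()[:10]:
        print(member.name, member.size)
```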
|
|
|
### Usage |
|
|
|
#### Loading with WebDataset library (Recommended for large datasets) |
|
|
|
```python |
|
import webdataset as wds |
|
import json |
|
from PIL import Image |
|
import io

import glob
|
|
|
# Collect the TAR shards for one language and build a WebDataset
# (webdataset expands brace patterns, not shell globs, so expand the glob explicitly)
shards = sorted(glob.glob("path/to/language/tarfiles/*.tar"))
dataset = wds.WebDataset(shards)
|
|
|
# Process the dataset |
|
for sample in dataset: |
|
# Access image |
|
image_data = sample["jpg"] # Raw image bytes |
|
image = Image.open(io.BytesIO(image_data)) |
|
|
|
# Access metadata |
|
image_id = sample["image_id.txt"].decode('utf-8') |
|
font_used = sample["font_used.txt"].decode('utf-8') |
|
regions = json.loads(sample["regions.json"].decode('utf-8')) |
|
vqa_data = json.loads(sample["vqa.json"].decode('utf-8')) |
|
|
|
print(f"Image ID: {image_id}") |
|
print(f"Font: {font_used}") |
|
print(f"Regions: {len(regions)}") |
|
print(f"VQA entries: {len(vqa_data)}") |
|
``` |
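
Shards can also be streamed directly from the Hub without downloading a whole language subset first. A sketch, assuming the repository's standard `resolve/main` file URLs and the shard naming shown under File Organization below:

```python
import webdataset as wds

# Stream one shard over HTTPS; webdataset accepts "pipe:" sources, so curl does the fetching
url = (
    "https://huggingface.co/datasets/Nayana-cognitivelab/"
    "NayanaDocs-Global-45k-webdataset/resolve/main/fr/fr_00000.tar"
)
dataset = wds.WebDataset(f"pipe:curl -s -L {url}")

for sample in dataset:
    print(sample["__key__"])  # shared prefix of the sample's files
    break
```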
|
|
|
#### Loading with HuggingFace datasets library |
|
|
|
```python |
|
from datasets import load_dataset |
|
import json |
|
|
|
# Load specific language subset |
|
dataset = load_dataset("webdataset", data_dir="hf://datasets/Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset/fr", split="train") |
|
|
|
# Or with streaming for memory efficiency |
|
dataset = load_dataset("webdataset", data_dir="hf://datasets/Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset/fr", split="train", streaming=True) |
|
|
|
# Access data |
|
for sample in dataset: |
|
image = sample["jpg"] # PIL Image |
|
image_id = sample["image_id.txt"] # string |
|
font_used = sample["font_used.txt"] # string |
|
    regions = sample["regions.json"]  # decoded automatically to a list of region dicts

    vqa_data = sample["vqa.json"]  # decoded automatically to a dict
|
``` |
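
Since each language subset is declared as a config in this card, a subset can also be loaded by repository ID and config name, which resolves the TAR paths automatically:

```python
from datasets import load_dataset

# Load the French subset by config name; streaming avoids a full download
dataset = load_dataset(
    "Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset",
    "fr",
    split="train",
    streaming=True,
)

sample = next(iter(dataset))
print(sample.keys())
```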
|
|
|
#### Manual download and processing |
|
|
|
```python |
|
from huggingface_hub import hf_hub_download |
|
import tarfile |
|
import webdataset as wds |
|
|
|
# Download a specific TAR file |
|
tar_path = hf_hub_download( |
|
repo_id="Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset", |
|
filename="bn/bn_00000.tar", |
|
repo_type="dataset" |
|
) |
|
|
|
# Process with webdataset |
|
dataset = wds.WebDataset(tar_path) |
|
for sample in dataset: |
|
# Process sample |
|
pass |
|
``` |
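
To fetch every shard of one language at once instead of a single file, `snapshot_download` with an `allow_patterns` filter is an option. A sketch for the Arabic subset:

```python
from huggingface_hub import snapshot_download
import glob
import os
import webdataset as wds

# Download only the Arabic shards from the dataset repository
local_dir = snapshot_download(
    repo_id="Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset",
    repo_type="dataset",
    allow_patterns=["ar/*.tar"],
)

# Build a WebDataset over all downloaded shards
shards = sorted(glob.glob(os.path.join(local_dir, "ar", "*.tar")))
dataset = wds.WebDataset(shards)
```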
|
|
|
### Performance Tips |
|
|
|
1. **Streaming**: Use `streaming=True` for large datasets to avoid downloading everything at once |
|
2. **WebDataset library**: Use the `webdataset` library directly for maximum performance |
|
3. **Parallel processing**: WebDataset composes with PyTorch `DataLoader` workers and pipeline stages such as shuffling and decoding (a sketch follows this list)
|
4. **Selective loading**: Download only the language TAR files you need |
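
As an illustration of tips 1–3, the sketch below chains webdataset's shuffling and decoding stages and hands the result to a PyTorch `DataLoader`; the shard pattern and worker count are illustrative, not part of the dataset.

```python
import webdataset as wds
from torch.utils.data import DataLoader

# Illustrative brace pattern over local shards of one language
shards = "ar/ar_{00000..00009}.tar"

dataset = (
    wds.WebDataset(shards, shardshuffle=True)
    .shuffle(1000)                    # buffer-based sample shuffling
    .decode("pil")                    # .jpg -> PIL Image; .json/.txt decoded as well
    .to_tuple("jpg", "regions.json")  # keep only the fields needed downstream
)

# batch_size=None yields individual samples; workers read different shards in parallel
loader = DataLoader(dataset, batch_size=None, num_workers=4)

for image, regions in loader:
    # image is a PIL Image, regions is the parsed list of region dicts
    break
```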
|
|
|
### File Organization |
|
|
|
``` |
|
repository/
├── ar/
│   ├── ar_00000.tar
│   ├── ar_00001.tar
│   └── ...
├── fr/
│   ├── fr_00000.tar
│   ├── fr_00001.tar
│   └── ...
└── README.md
|
``` |
|
|
|
### Metadata Schema |
|
|
|
#### regions.json |
|
```json |
|
[ |
|
{ |
|
"bbox": {"xmin": 10, "ymin": 20, "xmax": 100, "ymax": 50}, |
|
"english_text": "Original text", |
|
"translated_text": "Translated text", |
|
"layout_type": "paragraph", |
|
"region_id": 1 |
|
} |
|
] |
|
``` |
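
The bounding boxes appear to be absolute pixel coordinates, so regions can be overlaid directly on the image for inspection. A sketch, assuming `image` and `regions` were obtained as in the usage examples above:

```python
from PIL import ImageDraw

# Draw each region's box and id on a copy of the document image
annotated = image.copy()
draw = ImageDraw.Draw(annotated)

for region in regions:
    box = region["bbox"]
    draw.rectangle(
        (box["xmin"], box["ymin"], box["xmax"], box["ymax"]),
        outline="red",
        width=2,
    )
    draw.text((box["xmin"], max(box["ymin"] - 12, 0)), str(region["region_id"]), fill="red")

annotated.save("annotated_sample.jpg")
```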
|
|
|
#### vqa.json |
|
```json |
|
{ |
|
"questions": [ |
|
{ |
|
"question": "What is the main topic?", |
|
"answer": "Document analysis", |
|
"type": "topic", |
|
"options": ["Analysis", "Summary", "Review"] |
|
} |
|
] |
|
} |
|
``` |
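
For training or evaluation it is often convenient to flatten the VQA annotations into plain question/answer records. A small sketch, assuming `vqa_data` was parsed as in the usage examples above:

```python
# Flatten the nested VQA structure into (question, answer) records
qa_pairs = [
    {
        "question": q["question"],
        "answer": q["answer"],
        "type": q.get("type"),
        "options": q.get("options", []),
    }
    for q in vqa_data.get("questions", [])
]

print(f"Collected {len(qa_pairs)} question/answer pairs")
```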
|
|
|
|