---
task_categories:
- image-to-text
- visual-question-answering
tags:
- ocr
- document-analysis
- multilingual
- vqa
- webdataset
size_categories:
- 100K<n<1M
configs:
- config_name: fr
  data_files:
  - split: train
    path: fr/*.tar
- config_name: it
  data_files:
  - split: train
    path: it/*.tar
---
# Nayana-DocOCR Global Annotated Dataset

## Dataset Description

This is a large-scale multilingual document OCR dataset containing approximately 400GB of document images with comprehensive annotations across multiple global languages and English. The dataset is stored in the WebDataset format as TAR archives for efficient streaming and processing.
## Available Language Subsets

- fr (FR): Available
## Dataset Statistics

- Available Languages: 1
- Total Size: ~400GB
- Format: WebDataset (TAR archives)
- Chunk Size: 5GB per TAR file
## Dataset Structure

This dataset uses the WebDataset format, where each sample is stored as a group of files sharing a common prefix within a TAR archive:

- Images: Document images in JPG format (PNGs are converted to JPG for size optimization)
- Metadata Files: Separate text/JSON files for each field:
  - `XXXXXXXX.jpg`: The document image
  - `XXXXXXXX.image_id.txt`: Unique identifier for the image
  - `XXXXXXXX.font_used.txt`: Font information used in the document
  - `XXXXXXXX.regions.json`: Text regions with bounding boxes and OCR results
  - `XXXXXXXX.vqa.json`: Visual Question Answering annotations
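To make the prefix-based grouping concrete, here is a minimal standard-library sketch (the sample ID and file contents are made up for illustration) showing how one sample's files sit side by side in a TAR archive and can be regrouped by the prefix before the first dot:

```python
import io
import json
import tarfile

# Hypothetical sample ID and contents, for illustration only.
sample_id = "00000042"
files = {
    f"{sample_id}.image_id.txt": b"00000042",
    f"{sample_id}.font_used.txt": b"DejaVu Sans",
    f"{sample_id}.regions.json": json.dumps(
        [{"bbox": {"xmin": 10, "ymin": 20, "xmax": 100, "ymax": 50},
          "english_text": "Hello", "translated_text": "Bonjour",
          "layout_type": "paragraph", "region_id": 1}]
    ).encode("utf-8"),
}

# Write the files into an in-memory TAR archive. WebDataset groups
# all files whose names share the prefix before the first dot.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Read the archive back and recover the sample prefixes.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    names = tar.getnames()
prefixes = {n.split(".", 1)[0] for n in names}
print(prefixes)  # {'00000042'}
```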
## Usage

### Loading with the WebDataset library (recommended for large datasets)

```python
import io
import json

import webdataset as wds
from PIL import Image

# Create a WebDataset from TAR files
dataset = wds.WebDataset("path/to/language/tarfiles/*.tar")

# Process the dataset
for sample in dataset:
    # Access the image
    image_data = sample["jpg"]  # Raw image bytes
    image = Image.open(io.BytesIO(image_data))

    # Access metadata
    image_id = sample["image_id.txt"].decode("utf-8")
    font_used = sample["font_used.txt"].decode("utf-8")
    regions = json.loads(sample["regions.json"].decode("utf-8"))
    vqa_data = json.loads(sample["vqa.json"].decode("utf-8"))

    print(f"Image ID: {image_id}")
    print(f"Font: {font_used}")
    print(f"Regions: {len(regions)}")
    print(f"VQA entries: {len(vqa_data)}")
```
### Loading with the HuggingFace datasets library

```python
from datasets import load_dataset

# Load a specific language subset
dataset = load_dataset(
    "webdataset",
    data_dir="hf://datasets/Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset/fr",
    split="train",
)

# Or stream for memory efficiency
dataset = load_dataset(
    "webdataset",
    data_dir="hf://datasets/Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset/fr",
    split="train",
    streaming=True,
)

# Access data (the loader decodes fields by extension, so the
# .json fields arrive already parsed)
for sample in dataset:
    image = sample["jpg"]              # PIL Image
    image_id = sample["image_id.txt"]  # string
    font_used = sample["font_used.txt"]  # string
    regions = sample["regions.json"]   # parsed JSON
    vqa_data = sample["vqa.json"]      # parsed JSON
```
### Manual download and processing

```python
import webdataset as wds
from huggingface_hub import hf_hub_download

# Download a specific TAR file
tar_path = hf_hub_download(
    repo_id="Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset",
    filename="fr/fr_00000.tar",
    repo_type="dataset",
)

# Process with webdataset
dataset = wds.WebDataset(tar_path)
for sample in dataset:
    # Process sample
    pass
```
## Performance Tips

- Streaming: Use `streaming=True` for large datasets to avoid downloading everything at once
- WebDataset library: Use the `webdataset` library directly for maximum performance
- Parallel processing: WebDataset supports parallel processing and data-pipeline optimization
- Selective loading: Download only the language TAR files you need
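One common way to set up parallel processing over a sharded dataset like this (a sketch; the shard names below are hypothetical) is to give each worker a disjoint subset of the TAR shards, so no two workers ever read the same file:

```python
# Hypothetical list of shard paths for one language subset.
shards = [f"fr/fr_{i:05d}.tar" for i in range(8)]

def shards_for_worker(shards, worker_id, num_workers):
    """Return the shards this worker should read (round-robin split)."""
    return shards[worker_id::num_workers]

# Each of 3 workers gets a disjoint slice of the shard list.
assignments = [shards_for_worker(shards, w, 3) for w in range(3)]
for w, subset in enumerate(assignments):
    print(w, subset)
```

The same idea underlies WebDataset's own shard splitting: parallelism happens at the shard level, which is why many medium-sized TAR files (here, 5GB chunks) stream better than one giant archive.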
## File Organization

```
repository/
├── ar/
│   ├── ar_00000.tar
│   ├── ar_00001.tar
│   └── ...
├── fr/
│   ├── fr_00000.tar
│   ├── fr_00001.tar
│   └── ...
└── README.md
```
## Metadata Schema

### regions.json

```json
[
  {
    "bbox": {"xmin": 10, "ymin": 20, "xmax": 100, "ymax": 50},
    "english_text": "Original text",
    "translated_text": "Translated text",
    "layout_type": "paragraph",
    "region_id": 1
  }
]
```
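Regions in this schema can be filtered and paired up with the standard library alone; here is a small sketch (the region contents below are made up to match the schema):

```python
import json

# Made-up regions following the schema above.
regions_json = """
[
  {"bbox": {"xmin": 10, "ymin": 20, "xmax": 100, "ymax": 50},
   "english_text": "Hello world", "translated_text": "Bonjour le monde",
   "layout_type": "paragraph", "region_id": 1},
  {"bbox": {"xmin": 0, "ymin": 0, "xmax": 40, "ymax": 15},
   "english_text": "Title", "translated_text": "Titre",
   "layout_type": "heading", "region_id": 2}
]
"""
regions = json.loads(regions_json)

def bbox_area(bbox):
    """Axis-aligned bounding-box area in pixels."""
    return (bbox["xmax"] - bbox["xmin"]) * (bbox["ymax"] - bbox["ymin"])

# Pair source and translated text for the paragraph regions only.
pairs = [
    (r["english_text"], r["translated_text"], bbox_area(r["bbox"]))
    for r in regions
    if r["layout_type"] == "paragraph"
]
print(pairs)  # [('Hello world', 'Bonjour le monde', 2700)]
```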
### vqa.json

```json
{
  "questions": [
    {
      "question": "What is the main topic?",
      "answer": "Document analysis",
      "type": "topic",
      "options": ["Analysis", "Summary", "Review"]
    }
  ]
}
```
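As an illustration, entries of this shape can be turned into multiple-choice prompt/answer pairs for model training or evaluation; the field names follow the schema above, while the prompt formatting is just one possible choice:

```python
import json

# Example vqa.json content, matching the schema above.
vqa = json.loads("""
{
  "questions": [
    {"question": "What is the main topic?",
     "answer": "Document analysis",
     "type": "topic",
     "options": ["Analysis", "Summary", "Review"]}
  ]
}
""")

def to_prompt(entry):
    """Format one VQA entry as a multiple-choice prompt string."""
    opts = " / ".join(entry["options"])
    return f"{entry['question']} Options: {opts}"

pairs = [(to_prompt(q), q["answer"]) for q in vqa["questions"]]
print(pairs[0][0])  # What is the main topic? Options: Analysis / Summary / Review
```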