---
task_categories:
  - image-to-text
  - visual-question-answering
tags:
  - ocr
  - document-analysis
  - multilingual
  - vqa
  - webdataset
size_categories:
  - 100K<n<1M
configs:
  - config_name: fr
    data_files:
      - split: train
        path: fr/*.tar
  - config_name: it
    data_files:
      - split: train
        path: it/*.tar
---

Nayana-DocOCR Global Annotated Dataset

Dataset Description

This is a large-scale multilingual document OCR dataset containing approximately 400GB of document images with comprehensive annotations, covering several global languages alongside English. The dataset is stored in the WebDataset format (TAR archives) for efficient streaming and processing.

Available Language Subsets

  • fr (FR): Available
  • it (IT): Available

Dataset Statistics

  • Available Languages: 2
  • Total Size: ~400GB
  • Format: WebDataset (TAR archives)
  • Chunk Size: 5GB per TAR file

Dataset Structure

This dataset uses the WebDataset format, where each sample is stored as a group of files sharing a common prefix within a TAR archive:

  • Images: Document images in JPG format (PNG converted to JPG for optimization)
  • Metadata Files: Separate text/JSON files for each field:
    • XXXXXXXX.jpg: The document image
    • XXXXXXXX.image_id.txt: Unique identifier for the image
    • XXXXXXXX.font_used.txt: Font information used in the document
    • XXXXXXXX.regions.json: Text regions with bounding boxes and OCR results
    • XXXXXXXX.vqa.json: Visual Question Answering annotations
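
Because each sample's files share the same prefix (shown as XXXXXXXX above), a shard can be inspected directly with the standard tarfile module. A minimal sketch, assuming a locally downloaded shard (the shard path is a placeholder):

import tarfile
from collections import defaultdict

# Group TAR members by sample key (everything before the first dot)
samples = defaultdict(list)
with tarfile.open("fr/fr_00000.tar") as tar:
    for member in tar.getmembers():
        key, _, field = member.name.partition(".")
        samples[key].append(field)

# Each sample should expose the five fields listed above
for key, fields in list(samples.items())[:3]:
    print(key, sorted(fields))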

Usage

Loading with the WebDataset library (recommended for large datasets)

import webdataset as wds
import json
import glob
import io
from PIL import Image

# Create a WebDataset from TAR files; WebDataset takes a list of shard paths
# (or a brace-notation string), so expand the shell glob first
dataset = wds.WebDataset(glob.glob("path/to/language/tarfiles/*.tar"))

# Process the dataset
for sample in dataset:
    # Access image
    image_data = sample["jpg"]  # Raw image bytes
    image = Image.open(io.BytesIO(image_data))
    
    # Access metadata
    image_id = sample["image_id.txt"].decode('utf-8')
    font_used = sample["font_used.txt"].decode('utf-8')
    regions = json.loads(sample["regions.json"].decode('utf-8'))
    vqa_data = json.loads(sample["vqa.json"].decode('utf-8'))
    
    print(f"Image ID: {image_id}")
    print(f"Font: {font_used}")
    print(f"Regions: {len(regions)}")
    print(f"VQA entries: {len(vqa_data)}")

Loading with the Hugging Face datasets library

from datasets import load_dataset

# Load a specific language subset by its config name
dataset = load_dataset("Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset", "fr", split="train")

# Or with streaming for memory efficiency
dataset = load_dataset("Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset", "fr", split="train", streaming=True)

# Access data; the datasets library decodes each field from its file extension
for sample in dataset:
    image = sample["jpg"]  # PIL Image
    image_id = sample["image_id.txt"]  # string
    font_used = sample["font_used.txt"]  # string
    regions = sample["regions.json"]  # parsed JSON (list of regions)
    vqa_data = sample["vqa.json"]  # parsed JSON (dict with a "questions" list)
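
With streaming=True, the dataset is iterable, so you can sanity-check a single sample without downloading the full subset:

# Peek at the first streamed sample
first = next(iter(dataset))
print(first["image_id.txt"], first["jpg"].size)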

Manual download and processing

from huggingface_hub import hf_hub_download
import tarfile
import webdataset as wds

# Download a specific TAR file
tar_path = hf_hub_download(
    repo_id="Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset",
    filename="bn/bn_00000.tar",
    repo_type="dataset"
)

# Process with webdataset
dataset = wds.WebDataset(tar_path)
for sample in dataset:
    # Process sample
    pass
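
To fetch every shard for one language at once, huggingface_hub's snapshot_download with an allow_patterns filter is a convenient alternative (a sketch; adjust the pattern to the subset you need):

from huggingface_hub import snapshot_download

# Download all TAR shards for a single language subset
local_dir = snapshot_download(
    repo_id="Nayana-cognitivelab/NayanaDocs-Global-45k-webdataset",
    repo_type="dataset",
    allow_patterns="fr/*.tar",
)
print(local_dir)  # local directory containing fr/fr_00000.tar, fr/fr_00001.tar, ...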

Performance Tips

  1. Streaming: Use streaming=True for large datasets to avoid downloading everything at once
  2. WebDataset library: Use the webdataset library directly for maximum performance
  3. Parallel processing: WebDataset shards can be read in parallel, e.g. by multiple DataLoader workers (see the sketch after this list)
  4. Selective loading: Download only the language TAR files you need
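
For tip 3, a minimal PyTorch sketch, assuming the fr shards are available locally (paths, batch size, and worker count are illustrative):

import glob
import webdataset as wds
from torch.utils.data import DataLoader

shards = glob.glob("fr/*.tar")
dataset = (
    wds.WebDataset(shards, shardshuffle=len(shards))  # shuffle shard order
    .decode("pil")
    .to_tuple("jpg", "regions.json")
    .batched(16)  # collate 16 samples per batch inside the pipeline
)

# batch_size=None because batching already happens in the WebDataset pipeline;
# each worker reads a disjoint subset of shards
loader = DataLoader(dataset, batch_size=None, num_workers=4)

for images, regions in loader:
    pass  # images is a list of 16 PIL images, regions a list of 16 region lists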

File Organization

repository/
β”œβ”€β”€ fr/
β”‚   β”œβ”€β”€ fr_00000.tar
β”‚   β”œβ”€β”€ fr_00001.tar
β”‚   └── ...
β”œβ”€β”€ it/
β”‚   β”œβ”€β”€ it_00000.tar
β”‚   β”œβ”€β”€ it_00001.tar
β”‚   └── ...
└── README.md

Metadata Schema

regions.json

[
  {
    "bbox": {"xmin": 10, "ymin": 20, "xmax": 100, "ymax": 50},
    "english_text": "Original text",
    "translated_text": "Translated text",
    "layout_type": "paragraph",
    "region_id": 1
  }
]
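
A minimal sketch that overlays the region bounding boxes on the corresponding image (the file names are placeholders for one extracted sample):

import json
from PIL import Image, ImageDraw

image = Image.open("00000000.jpg").convert("RGB")
with open("00000000.regions.json") as f:
    regions = json.load(f)

draw = ImageDraw.Draw(image)
for region in regions:
    box = region["bbox"]
    draw.rectangle(
        [box["xmin"], box["ymin"], box["xmax"], box["ymax"]],
        outline="red", width=2,
    )
    draw.text((box["xmin"], max(box["ymin"] - 12, 0)), region["layout_type"], fill="red")

image.save("00000000.annotated.jpg")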

vqa.json

{
  "questions": [
    {
      "question": "What is the main topic?",
      "answer": "Document analysis",
      "type": "topic",
      "options": ["Analysis", "Summary", "Review"]
    }
  ]
}
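
A minimal sketch that turns these annotations into question/answer pairs, e.g. for evaluating a VQA model (the file name is a placeholder):

import json

with open("00000000.vqa.json") as f:
    vqa_data = json.load(f)

for q in vqa_data["questions"]:
    options = " / ".join(q.get("options", []))
    print(f"[{q['type']}] {q['question']} (options: {options})")
    print(f"  answer: {q['answer']}")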