How were the PDF annotations generated? (#3, opened by nfliu)
Hi! I see that the repo README mentions:

> Initially, we started from the readily available ~11TB zip files from PDFA in their initial data release. From the PDF digital files, we extracted the words, bounding boxes, and image bounding boxes that are available in the PDF file. This information is then reshaped into lines organized in reading order, under the key `lines`. We keep the non-reshaped word and bounding box information under the `words` key, should users want to use their own heuristic.
Would it be possible to provide a bit more information about how the words / bounding boxes / image bounding boxes were extracted? Thanks!
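For context, here is the kind of heuristic I imagine for the "reshaped into lines organized in reading order" step. This is purely my own sketch, not the dataset's actual pipeline: it assumes words arrive as `(text, (x0, y0, x1, y1))` tuples from some PDF parser, and groups them into lines by vertical midpoint.

```python
def words_to_lines(words, y_tol=3.0):
    """Group word boxes into lines of text in reading order.

    words: iterable of (text, (x0, y0, x1, y1)) tuples (hypothetical
    format; real extractors emit their own word records).
    Words whose vertical midpoints fall within y_tol of an existing
    line are merged into it; each line is then sorted left-to-right.
    """
    lines = []
    # Sort top-to-bottom, then left-to-right, so lines form in reading order.
    for text, (x0, y0, x1, y1) in sorted(words, key=lambda w: (w[1][1], w[1][0])):
        mid = (y0 + y1) / 2
        for line in lines:
            if abs(line["y"] - mid) <= y_tol:
                line["words"].append((text, (x0, y0, x1, y1)))
                break
        else:
            lines.append({"y": mid, "words": [(text, (x0, y0, x1, y1))]})
    for line in lines:
        line["words"].sort(key=lambda w: w[1][0])  # left-to-right within a line
    return [" ".join(t for t, _ in line["words"]) for line in lines]


# Example: two words on one baseline, one word below them.
words = [
    ("world", (30, 10, 55, 20)),
    ("Hello", (0, 10, 25, 20)),
    ("Next", (0, 30, 20, 40)),
]
print(words_to_lines(words))  # → ['Hello world', 'Next']
```

Is the actual approach something along these lines, or does it rely on the PDF's internal text/line structure directly?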