---
license: mit
---

# 📖 READoc

📄 [Paper](https://www.arxiv.org/abs/2409.05137) | 💻 [Code](https://github.com/icip-cas/READoc) | *Current Version: v0.1*

Dataset of READoc from the paper [READoc: A Unified Benchmark for Realistic Document Structured Extraction](https://arxiv.org/abs/2409.05137).

## Abstract

Document Structured Extraction (DSE) aims to extract structured content from raw documents. Despite the emergence of numerous DSE systems (e.g., Marker, Nougat, GPT-4), their unified evaluation remains inadequate, significantly hindering the field's advancement. This problem is largely attributed to existing benchmark paradigms, which exhibit fragmented and localized characteristics. To address these limitations and offer a thorough evaluation of DSE systems, we introduce a novel benchmark named READoc, which defines DSE as a realistic task of converting unstructured PDFs into semantically rich Markdown. The READoc dataset is derived from 2,233 diverse and real-world documents from arXiv and GitHub. In addition, we develop a DSE Evaluation Suite comprising Standardization, Segmentation, and Scoring modules to conduct a unified evaluation of state-of-the-art DSE approaches. By evaluating a range of pipeline tools, expert visual models, and general VLMs, we identify the gap between current work and the unified, realistic DSE objective for the first time. We aspire that READoc will catalyze future research in DSE, fostering more comprehensive and practical solutions.

## Note

Please note that we have not yet released the full dataset; for now, we are providing the complete set of PDFs here. You can send [me](mailto:lizichao2022@iscas.ac.cn) the Markdown files generated by your DSE systems, and I will calculate the evaluation scores for you. This evaluation flow will be improved in the future.

You can also look into our [GitHub repository](https://github.com/icip-cas/READoc), which contains a few sample documents as a reference.
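
Below is a minimal sketch of how you might batch-convert the provided PDFs into Markdown files for submission. The function `convert_pdf_to_markdown`, the directory names, and the one-`.md`-file-per-PDF convention are illustrative assumptions, not part of an official READoc interface; plug in whatever DSE system you use.

```python
from pathlib import Path


def convert_pdf_to_markdown(pdf_path: Path) -> str:
    """Placeholder for your DSE system's PDF-to-Markdown conversion.

    Replace this with a call to your own tool (e.g., a pipeline tool,
    an expert visual model, or a general VLM). This stub is a
    hypothetical hook, not part of the READoc codebase.
    """
    raise NotImplementedError("Plug in your DSE system here.")


def batch_convert(pdf_dir: Path, out_dir: Path) -> None:
    """Convert every PDF in pdf_dir, writing one Markdown file per document.

    The output keeps the original file stem so each .md file can be
    matched back to its source PDF (an assumed naming convention).
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    for pdf_path in sorted(pdf_dir.glob("*.pdf")):
        markdown = convert_pdf_to_markdown(pdf_path)
        (out_dir / f"{pdf_path.stem}.md").write_text(markdown, encoding="utf-8")


if __name__ == "__main__":
    # Hypothetical directory names; adjust to wherever you downloaded the PDFs.
    batch_convert(Path("readoc_pdfs"), Path("markdown_outputs"))
```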