raksama19 committed
Commit aab4b72 · verified · 1 Parent(s): 47ffd67

Delete hf_model/README.md

Files changed (1)
1. hf_model/README.md +0 -90
hf_model/README.md DELETED
@@ -1,90 +0,0 @@
---
license: mit
language:
- zh
- en
tags:
- document-parsing
- document-understanding
- document-intelligence
- ocr
- layout-analysis
- table-extraction
- multimodal
- vision-language-model
datasets:
- custom
pipeline_tag: image-text-to-text
library_name: transformers
---

# Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting

<a href="https://github.com/bytedance/Dolphin"><img src="https://img.shields.io/badge/Code-Github-blue"></a>

<!--
<div align="center">
  <img src="https://cdn.wandeer.world/null/dolphin_demo.gif" width="800">
</div>
-->

## Model Description

Dolphin (**Do**cument Image **P**arsing via **H**eterogeneous Anchor Prompt**in**g) is a novel multimodal document image parsing model that follows an analyze-then-parse paradigm. It addresses the challenges of complex document understanding through a two-stage approach designed to handle intertwined elements such as text paragraphs, figures, formulas, and tables.

## 📑 Overview

Document image parsing is challenging because its elements, such as text paragraphs, figures, formulas, and tables, are complexly intertwined. Dolphin addresses these challenges through a two-stage approach (see the sketch below):

1. **🔍 Stage 1**: Comprehensive page-level layout analysis that generates the element sequence in natural reading order
2. **🧩 Stage 2**: Efficient parallel parsing of document elements using heterogeneous anchors and task-specific prompts
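
To make the paradigm concrete, here is a minimal sketch of the analyze-then-parse loop. This is not the repository's actual API: `parse(image, prompt)` stands in for a single encoder-decoder pass, and the prompt strings and element layout are illustrative assumptions.

```python
# Hedged sketch of the analyze-then-parse paradigm. `parse(image, prompt)`
# is a stand-in for one encoder-decoder pass; the prompt strings and the
# element dict layout are illustrative assumptions, not the repo's API.

def parse_page(page_image, parse):
    # Stage 1: one pass over the full page yields the layout as an element
    # sequence (type + bounding box) in natural reading order.
    layout = parse(page_image, "Parse the reading order of this document.")

    # Stage 2: each element crop gets a type-specific prompt. Elements are
    # independent of one another, so this loop can run as one parallel batch.
    prompts = {
        "text": "Read text in the image.",
        "table": "Parse the table in the image.",
        "formula": "Read formula in the image.",
    }
    results = []
    for element in layout:  # element: {"type": str, "box": (l, t, r, b)}
        crop = page_image.crop(element["box"])
        results.append(parse(crop, prompts.get(element["type"], prompts["text"])))
    return results
```

Because Stage 1 fixes the reading order up front, the Stage 2 calls have no dependencies on one another, which is what makes the parallel parsing possible.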

<!-- <div align="center">
  <img src="https://cdn.wandeer.world/null/dolphin_framework.png" width="680">
</div> -->

Dolphin achieves promising performance across diverse page-level and element-level parsing tasks while ensuring superior efficiency through its lightweight architecture and parallel parsing mechanism.

## Model Architecture

Dolphin is built on a vision-encoder-decoder architecture using transformers:

- **Vision Encoder**: Based on Swin Transformer, for extracting visual features from document images
- **Text Decoder**: Based on MBart, for decoding text from the visual features
- **Prompt-based interface**: Uses natural language prompts to control parsing tasks

The model is implemented as a Hugging Face `VisionEncoderDecoderModel` for easy integration with the Transformers ecosystem.
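
As a concrete illustration of that integration, here is a minimal loading sketch. The hub ID `ByteDance/Dolphin` is an assumption; substitute the actual checkpoint path if it differs.

```python
# Minimal loading sketch. The hub ID "ByteDance/Dolphin" is an assumption;
# replace it with the actual checkpoint path if yours differs.
from transformers import AutoProcessor, VisionEncoderDecoderModel

model_id = "ByteDance/Dolphin"
processor = AutoProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)
model.eval()

# The encoder/decoder split described above is visible on the loaded model:
print(type(model.encoder).__name__)  # Swin-based vision encoder
print(type(model.decoder).__name__)  # MBart-based text decoder
```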

## Usage

Our demo will be released soon. Stay tuned! 🔥

Please refer to our [GitHub repository](https://github.com/bytedance/Dolphin) for detailed usage:

- [Page-wise parsing](https://github.com/bytedance/Dolphin/demo_page_hf.py): for an entire document image
- [Element-wise parsing](https://github.com/bytedance/Dolphin/demo_element_hf.py): for a single element (paragraph, table, or formula) image
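
In the meantime, a rough element-wise inference sketch looks like the following. The hub ID, the image path, and the exact prompt format are assumptions to verify against the GitHub demos.

```python
# Hedged element-wise inference sketch. The hub ID, image path, and prompt
# format are assumptions; check the GitHub demo scripts for exact strings.
import torch
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

model_id = "ByteDance/Dolphin"  # assumed hub ID
processor = AutoProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id).eval()

image = Image.open("table_crop.png").convert("RGB")  # a cropped table element
pixel_values = processor(image, return_tensors="pt").pixel_values

# Dolphin is prompt-controlled; the task prompt is fed as decoder input.
prompt = "<s>Parse the table in the image. <Answer/>"  # illustrative format
prompt_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    output_ids = model.generate(
        pixel_values=pixel_values,
        decoder_input_ids=prompt_ids,
        max_length=1024,
    )

print(processor.tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```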

## License

This model is released under the MIT License.

## Citation

```bibtex
@inproceedings{dolphin2025,
  title={Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting},
  author={Feng, Hao and Wei, Shu and Fei, Xiang and Shi, Wei and Han, Yingdong and Liao, Lei and Lu, Jinghui and Wu, Binghong and Liu, Qi and Lin, Chunhui and Tang, Jingqun and Liu, Hao and Huang, Can},
  year={2025},
  booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL)}
}
```

## Acknowledgements

This model builds on several open-source projects, including:

- [Hugging Face Transformers](https://github.com/huggingface/transformers)
- [Donut](https://github.com/clovaai/donut/)
- [Nougat](https://github.com/facebookresearch/nougat)
- [Swin Transformer](https://github.com/microsoft/Swin-Transformer)