Jietson committed · Commit 4544530 · verified · 1 Parent(s): b6653f3

Upload 2 files

Files changed (2):
  1. README2.md +172 -0
  2. teaser.jpg +3 -0
README2.md ADDED
@@ -0,0 +1,172 @@
---
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: qtype
    dtype: string
  - name: figure_path
    dtype: image
  - name: visual_figure_path
    list: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: instructions
    dtype: string
  - name: prompt
    dtype: string
  - name: options
    list: string
  splits:
  - name: info
    num_bytes: 9389294399.0
    num_examples: 55091
  - name: plain
    num_bytes: 15950918129.0
    num_examples: 55091
  - name: visual_metaphor
    num_bytes: 144053150.0
    num_examples: 450
  - name: visual_basic
    num_bytes: 1254942699.466
    num_examples: 7297
  download_size: 20376840742
  dataset_size: 26739208377.466
configs:
- config_name: default
  data_files:
  - split: info
    path: data/info-*
  - split: plain
    path: data/plain-*
  - split: visual_metaphor
    path: data/visual_metaphor-*
  - split: visual_basic
    path: data/visual_basic-*
---

# InfoChartQA: Benchmark for Multimodal Question Answering on Infographic Charts

![teaser](teaser.jpg)

🤗 [Dataset](https://huggingface.co/datasets/Jietson/InfoChartQA)

# Dataset

You can find our dataset on Hugging Face: 🤗 [InfoChartQA Dataset](https://huggingface.co/datasets/Jietson/InfoChartQA)
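
For quick inspection, here is a minimal sketch of loading one split with the Hugging Face `datasets` library; the split and field names follow the dataset card above, but this loader snippet is our illustration, not part of the official instructions:

```python
# Minimal sketch: load one split and inspect a sample.
from datasets import load_dataset

# Available splits: info, plain, visual_metaphor, visual_basic.
ds = load_dataset("Jietson/InfoChartQA", split="visual_metaphor")

sample = ds[0]
print(sample["question_id"], sample["qtype"])
print(sample["question"])
print(sample["options"])     # option letters and contents, if any
img = sample["figure_path"]  # decoded to a PIL image by `datasets`
print(img.size)
```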

# Usage

Each question entry is arranged as:

```
--question_id: str
--qtype: str
--figure_path: image
--visual_figure_path: list of image
--question: str
--answer: str
--instructions: str
--prompt: str
--options: dict ("A/B/C/D": "option_content")
```
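
For illustration, a multiple-choice entry might look like the following; all field values here are hypothetical:

```
{
  "question_id": "12345",
  "qtype": "multiple-choice",
  "figure_path": "figures/12345.jpg",
  "visual_figure_path": ["visual_figures/12345_1.jpg"],
  "question": "Which category has the largest share?",
  "options": {"A": "Food", "B": "Housing", "C": "Transport", "D": "Health"},
  "answer": "B",
  "instructions": "Answer with the option letter only."
}
```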

Each question is built as:

```
image_input: figure_path, visual_figure_path_1 ... visual_figure_path_n (if any)
text_input: prompt (if any) + question + options (if any) + instructions (if any)
```
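
Following this recipe, the hypothetical entry shown earlier would produce the text input below (it has no `prompt` field, so that part is skipped):

```
Which category has the largest share?
A Food
B Housing
C Transport
D Health
Answer with the option letter only.
```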

# Evaluate

Store the model's responses and evaluate them as follows:

```python
# Example evaluation loop
import json

from checker import evaluate

def build_question(query):  # assemble the text input for one entry
    question = ""
    if "prompt" in query:
        question += f"{query['prompt']}\n"
    question += f"{query['question']}\n"
    if "options" in query:
        for key in query["options"]:
            question += f"{key} {query['options'][key]}\n"
    if "instructions" in query:
        question += query["instructions"]
    return question

with open("visual_basic.json", "r", encoding="utf-8") as f:
    queries = json.load(f)

for idx in range(len(queries)):
    question = build_question(queries[idx])
    figure_path = [queries[idx]["figure_path"]]
    visual_figure_path = queries[idx]["visual_figure_path"]
    # `model` is a placeholder for your VLM wrapper (one possibility is
    # sketched below); it receives the text input and all image inputs.
    response = model.generate(question, figure_path + visual_figure_path)
    queries[idx]["response"] = response

with open("model_response.json", "w", encoding="utf-8") as f:
    json.dump(queries, f)

evaluate("model_response.json", "path_to_save_the_result")
```
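
`model.generate` above is a placeholder for your own model wrapper. As one possible adaptation, here is a sketch of such a wrapper built on the OpenAI Python SDK; the model name, JPEG encoding, and client setup are assumptions, not part of the benchmark's tooling:

```python
# Hypothetical wrapper: answer a question via an OpenAI-compatible chat API.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate(question: str, image_paths: list[str], model_name: str = "gpt-4o") -> str:
    # One text part followed by one image part per input figure.
    content = [{"type": "text", "text": question}]
    for path in image_paths:
        with open(path, "rb") as img:
            b64 = base64.b64encode(img.read()).decode("utf-8")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
        })
    reply = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": content}],
    )
    return reply.choices[0].message.content
```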

Or, once your answers are generated, simply run:

```bash
python -c "import sys, checker; checker.evaluate(sys.argv[1], sys.argv[2])" PATH_TO_INPUT_FILE PATH_TO_OUTPUT_FILE
```

# Leaderboard

Δ is the accuracy drop from plain charts to their infographic counterparts (Plain − Infographic).

| Model | Infographic | Plain | Δ | Basic | Metaphor | Avg. |
| ------------------------ | ----------- | ------- | ----- | ------- | -------- | ----- |
| **Baselines** | | | | | | |
| Human | 95.35\* | 96.28\* | 0.93 | 93.17\* | 88.69 | 90.93 |
| **Proprietary Models** | | | | | | |
| OpenAI O4-mini | 79.41 | 94.61 | 15.20 | 92.12 | 54.76 | 73.44 |
| GPT-4o | 66.09 | 81.77 | 15.68 | 81.77 | 47.19 | 64.48 |
| Claude 3.5 Sonnet | 65.67 | 83.11 | 17.44 | 90.36 | 55.33 | 72.85 |
| Gemini 2.5 Pro Preview | 83.31 | 93.88 | 10.07 | 90.01 | 60.42 | 75.22 |
| Gemini 2.5 Flash Preview | 71.91 | 84.66 | 12.75 | 82.02 | 56.28 | 69.15 |
| **Open-Source Models** | | | | | | |
| Qwen2.5-VL-72B | 62.06 | 78.47 | 16.41 | 77.34 | 54.64 | 65.99 |
| Llama-4 Scout | 67.41 | 84.84 | 17.43 | 81.76 | 51.89 | 66.83 |
| Intern-VL3-78B | 66.38 | 82.18 | 15.80 | 79.46 | 51.52 | 65.49 |
| Intern-VL3-8B | 56.82 | 73.50 | 16.68 | 74.26 | 49.57 | 61.92 |
| Janus Pro | 29.61 | 45.29 | 15.68 | 41.18 | 42.21 | 41.69 |
| DeepSeek VL2 | 39.81 | 47.01 | 7.20 | 58.72 | 44.54 | 51.63 |
| Phi-4 | 46.20 | 66.97 | 20.77 | 61.87 | 38.31 | 50.09 |
| LLaVA OneVision Chat 78B | 47.78 | 63.66 | 15.88 | 62.11 | 50.22 | 56.17 |
| LLaVA OneVision Chat 7B | 38.41 | 54.43 | 16.02 | 61.03 | 45.67 | 53.35 |
| Pixtral | 44.70 | 60.88 | 16.11 | 64.23 | 50.87 | 57.55 |
| Ovis1.6-Gemma2-9B | 50.56 | 64.52 | 13.98 | 60.96 | 34.42 | 47.69 |
| ChartGemma | 19.99 | 33.81 | 13.82 | 30.52 | 33.77 | 32.15 |
| TinyChart | 26.34 | 44.73 | 18.39 | 14.72 | 9.03 | 11.88 |
| ChartInstruct-LLama2 | 20.55 | 27.91 | 7.36 | 33.86 | 33.12 | 33.49 |

# License

Our original data contributions (all data except the charts) are distributed under the [CC BY-SA 4.0](https://github.com/princeton-nlp/CharXiv/blob/main/data/LICENSE) license. Our code is licensed under the [Apache 2.0](https://github.com/princeton-nlp/CharXiv/blob/main/LICENSE) license. The copyright of the charts belongs to the original authors.

## Cite

If you use or are inspired by our work, please consider citing us:

```
@misc{lin2025infochartqabenchmarkmultimodalquestion,
      title={InfoChartQA: A Benchmark for Multimodal Question Answering on Infographic Charts},
      author={Minzhi Lin and Tianchi Xie and Mengchen Liu and Yilin Ye and Changjian Chen and Shixia Liu},
      year={2025},
      eprint={2505.19028},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.19028},
}
```
teaser.jpg ADDED

Git LFS Details

  • SHA256: 600e69cf39220c852f81a9ecbc2fcb9f332b347f3a79ff33d61f4220127530db
  • Pointer size: 132 Bytes
  • Size of remote file: 3.11 MB