Add task category and link to paper

#2
by nielsr (HF Staff) · opened
Files changed (1)
  1. README.md +79 -8
README.md CHANGED
@@ -45,16 +45,19 @@ configs:
      path: data/visual_metaphor-*
  - split: visual_basic
      path: data/visual_basic-*
-
+ task_categories:
+ - table-question-answering
+ language:
+ - en
---

-
- # InfoChartQA: Benchmark for Multimodal Question Answering on Infographic Charts
+ # InfoChartQA: Benchmark for Multimodal Question Answering on Infographic Charts
+
+ [Paper](https://arxiv.org/abs/2505.19028)

🤗[Dataset](https://huggingface.co/datasets/Jietson/InfoChartQA)

- # Dataset
+ # Dataset
You can find our dataset on huggingface: 🤗[InfoChartQA Dataset](https://huggingface.co/datasets/Jietson/InfoChartQA)

# Usage
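For context, loading one split with 🤗 Datasets should look roughly like this. The split names follow the `configs` entries above; everything else is a sketch, not the card's own usage snippet:

```python
from datasets import load_dataset

# Split names ("visual_basic", "visual_metaphor") are taken from the
# configs section above; the record fields are whatever the card defines.
ds = load_dataset("Jietson/InfoChartQA", split="visual_basic")
print(len(ds), ds.column_names)
```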
@@ -89,11 +92,14 @@ You should store and evaluate model's response as:
def build_question(query):  # to build the question
    question = ""
    if "prompt" in query:
-        question = question + f"{query["prompt"]}\n"
-    question = question + f"{query["question"]}\n"
+        question = question + f"{query['prompt']}\n"
+    question = question + f"{query['question']}\n"
    if "options" in query:
        for _ in query["options"]:
-            question = question + f"{_} {query['options'][_]}\n"
+            question = question + f"{_} {query['options'][_]}\n"
    if "instructions" in query:
        question = question + query["instructions"]
    return question
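For reference, here is how `build_question` behaves on a hypothetical record; the field names ("prompt", "question", "options", "instructions") come from the snippet above, while the values are invented for illustration:

```python
# Hypothetical query record; real contents come from the dataset itself.
query = {
    "prompt": "Answer the question based on the chart.",
    "question": "Which split does this example come from?",
    "options": {"A": "visual_basic", "B": "visual_metaphor"},
    "instructions": "Answer with the option letter only.",
}

print(build_question(query))
# Answer the question based on the chart.
# Which split does this example come from?
# A visual_basic
# B visual_metaphor
# Answer with the option letter only.
```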
@@ -116,6 +122,71 @@ from checker import evaluate
evaluate("model_response.json", "path_to_save_the_result")
```

-
+ Or simply run it from the command line once your answers are generated:
+
+ ```bash
+ python -c "import sys, checker; checker.evaluate(sys.argv[1], sys.argv[2])" PATH_TO_INPUT_FILE PATH_TO_OUTPUT_FILE
+ ```
+
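The same invocation can be written as a small wrapper script; this sketch assumes only `checker.evaluate(input_path, output_path)` from the repository's `checker.py`, per the snippet above:

```python
# evaluate_cli.py -- equivalent to the one-liner above (a sketch).
# Assumes checker.py from this repository is on the import path.
import sys

import checker

if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.exit("usage: python evaluate_cli.py MODEL_RESPONSE_JSON RESULT_PATH")
    checker.evaluate(sys.argv[1], sys.argv[2])  # (responses file, where to save the result)
```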
+ # Leaderboard
+
+ | Model | Infographic | Plain | Δ | Basic | Metaphor | Avg. |
+ |------------------------------|-------------|---------|-------|--------|----------|--------|
+ | **Baselines** | | | | | | |
+ | Human | 95.35\* | 96.28\* | 0.93 | 93.17\* | 88.69 | 90.93 |
+ | **Proprietary Models** | | | | | | |
+ | OpenAI O4-mini | 79.41 | 94.61 | 15.20 | 92.12 | 54.76 | 73.44 |
+ | GPT-4.1 | 70.01 | 83.36 | 13.35 | 88.47 | 50.87 | 69.67 |
+ | GPT-4o | 66.09 | 81.77 | 15.68 | 81.77 | 47.19 | 64.48 |
+ | Claude 3.5 Sonnet | 65.67 | 83.11 | 17.44 | 90.36 | 55.33 | 72.85 |
+ | Gemini 2.5 Pro Preview | 83.31 | 93.88 | 10.07 | 90.01 | 60.42 | 75.22 |
+ | Gemini 2.5 Flash Preview | 71.91 | 84.66 | 12.75 | 82.02 | 56.28 | 69.15 |
+ | **Open-Source Models** | | | | | | |
+ | Qwen2.5-VL-72B | 62.06 | 78.47 | 16.41 | 77.34 | 54.64 | 65.99 |
+ | Llama-4 Scout | 67.41 | 84.84 | 17.43 | 81.76 | 51.89 | 66.83 |
+ | Intern-VL3-78B | 66.38 | 82.18 | 15.80 | 79.46 | 51.52 | 65.49 |
+ | Intern-VL3-8B | 56.82 | 73.50 | 16.68 | 74.26 | 49.57 | 61.92 |
+ | Janus Pro | 29.61 | 45.29 | 15.68 | 41.18 | 42.21 | 41.69 |
+ | DeepSeek VL2 | 39.81 | 47.01 | 7.20 | 58.72 | 44.54 | 51.63 |
+ | Phi-4 | 46.20 | 66.97 | 20.77 | 61.87 | 38.31 | 50.09 |
+ | LLaVA OneVision Chat 78B | 47.78 | 63.66 | 15.88 | 62.11 | 50.22 | 56.17 |
+ | LLaVA OneVision Chat 7B | 38.41 | 54.43 | 16.02 | 61.03 | 45.67 | 53.35 |
+ | Pixtral | 44.70 | 60.88 | 16.11 | 64.23 | 50.87 | 57.55 |
+ | Ovis1.6-Gemma2-9B | 50.56 | 64.52 | 13.98 | 60.96 | 34.42 | 47.69 |
+ | ChartGemma | 19.99 | 33.81 | 13.82 | 30.52 | 33.77 | 32.15 |
+ | TinyChart | 26.34 | 44.73 | 18.39 | 14.72 | 9.03 | 11.88 |
+ | ChartInstruct-LLama2 | 20.55 | 27.91 | 7.36 | 33.86 | 33.12 | 33.49 |
+
+ # License
+
+ Our original data contributions (all data except the charts) are distributed under the [CC BY-SA 4.0](https://github.com/princeton-nlp/CharXiv/blob/main/data/LICENSE) license. Our code is licensed under the [Apache 2.0](https://github.com/princeton-nlp/CharXiv/blob/main/LICENSE) license. The copyright of the charts belongs to the original authors.

+ ## Paper Links

+ ### 📌 Main Paper (This Repository)
+
+ - **[InfoChartQA: A Benchmark for Multimodal Question Answering on Infographic Charts](https://arxiv.org/abs/2505.19028)**
+   _Minzhi Lin, Tianchi Xie, Mengchen Liu, Yilin Ye, Changjian Chen, Shixia Liu_
+
+ ### Relevant Papers
+
+ - **[OrionBench: A Benchmark for Chart and Human-Recognizable Object Detection in Infographics](https://arxiv.org/abs/2505.17473)**
+   _Jiangning Zhu, Yuxing Zhou, Zheng Wang, Juntao Yao, Yima Gu, Yuhui Yuan, Shixia Liu_
+
+ - **[ChartGalaxy: A Dataset for Infographic Chart Understanding and Generation](https://arxiv.org/abs/2505.18668)**
+   _Zhen Li, Duan Li, Yukai Guo, Xinyuan Guo, Bowen Li, Lanxi Xiao, Shenyu Qiao, Jiashu Chen, Zijian Wu, Hui Zhang, Xinhuan Shu, Shixia Liu_
+
+ ## Cite
+
+ If you use or are inspired by our work, please consider citing us:
+
+ ```bibtex
+ @misc{lin2025infochartqabenchmarkmultimodalquestion,
+   title={InfoChartQA: A Benchmark for Multimodal Question Answering on Infographic Charts},
+   author={Minzhi Lin and Tianchi Xie and Mengchen Liu and Yilin Ye and Changjian Chen and Shixia Liu},
+   year={2025},
+   eprint={2505.19028},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2505.19028},
+ }
+ ```