---
license: apache-2.0
language:
- zh
- en
tags:
- vlm
- benchmark
- graphic-reasoning
- intelligence-test
---
# 🧠 ReasonBench: Benchmarking and Improving Visual Language Models for Complex Graphic Reasoning


<img src="https://huggingface.co/datasets/cistine/ReasonBench/resolve/main/image_1.jpg" alt="background" width="50%"/>
<p style="font-style:italic">Image: background</p>


## 🌐 Overview
**ReasonBench** is a comprehensive benchmark designed to evaluate Visual Language Models (VLMs) on complex graphical reasoning tasks. It contains **1,613 problems** collected from real-world intelligence tests, covering **11 core cognitive dimensions** and **29 task types**. This benchmark provides a robust framework for assessing VLMs' spatial, relational, and abstract reasoning capabilities.

**Dataset Type**: Visual Language Reasoning · Graphical Reasoning · Benchmark Evaluation

**Paper**: [https://arxiv.org/abs/2508.00323](https://arxiv.org/abs/2508.00323)

## 📊 Dataset Structure
### Core Cognitive Dimensions & Task Types
| Cognitive Dimension      | Task Type                   | Count |
|--------------------------|-----------------------------|-------|
| **Positional Patterns**  | Translation                 | 94    |
|                          | Rotation                    | 56    |
|                          | Combination                 | 30    |
| **Stylistic Patterns**   | Crossing                    | 54    |
|                          | Addition/Subtraction        | 67    |
|                          | Black/White Operation       | 63    |
| **Attribute Patterns**   | Symmetry                    | 109   |
|                          | Open/Close State            | 19    |
|                          | Combination                 | 6     |
| **Quantitative Patterns**| Lines                       | 173   |
|                          | Faces                       | 137   |
|                          | Points                      | 66    |
|                          | Elements                    | 94    |
|                          | Combination                 | 50    |
| **Spatial Patterns**     | Cubes                       | 109   |
|                          | 3D                          | 46    |
|                          | Polyhedrons                 | 17    |
|                          | Three Views                 | 40    |
|                          | Cross-Sections              | 35    |
|                          | Spatial Quantitative Trans. | 10    |
| **Special Patterns**     | 2D Combination              | 31    |
|                          | Figure Relations            | 40    |
| **Alphanumeric**         | Alphanumeric                | 27    |
| **B&W Blocks**           | Black & White Blocks        | 32    |
| **Other Patterns**       | Comprehensive               | 34    |
| **MENSA**                | Task 1                      | 35    |
|                          | Task 2                      | 39    |
| **Raven**                | Task 1                      | 40    |
|                          | Task 2                      | 60    |

### 🖼️ Input Formats
| Format                | Description |
|-----------------------|-------------|
| **Integrated Format** | Presents questions and options in a single image for holistic processing |
| **Separated Format**  | Splits questions and options into multiple images for step-by-step reasoning |
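The two formats above differ only in how many images accompany the question. As a minimal sketch of how an evaluation harness might assemble an OpenAI-style multimodal message for each format (the URLs and the field layout here are illustrative assumptions, not the dataset's actual schema):

```python
def build_message_content(question_text, image_urls):
    """Build an OpenAI-style multimodal content list: the question text
    followed by one image_url entry per image."""
    content = [{"type": "text", "text": question_text}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return content

prompt = "Which option completes the pattern? Answer with A, B, C, or D."

# Integrated format: one combined image holds both question and options.
integrated = build_message_content(prompt, ["https://example.com/combined.jpg"])

# Separated format: a question image plus one image per option.
separated = build_message_content(
    prompt,
    ["https://example.com/question.jpg"]
    + [f"https://example.com/option_{c}.jpg" for c in "ABCD"],
)
```

The same downstream API call can then consume either variant, so switching formats only changes the image list that is passed in.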

## 🔍 Key Features
- **Multi-format Evaluation**: Supports both integrated and separated input formats
- **Full Accessibility**: Provides public URLs for all images (questions, options, and combined sets)
- **Human Baseline**: Includes human performance metrics for comparison
- **Diverse Tasks**: Covers 29 distinct reasoning task types across 11 cognitive dimensions
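Because the benchmark reports results per cognitive dimension and compares them against a human baseline, scoring reduces to grouping predictions by dimension. A minimal sketch (the dimension names match the table above, but the records are made-up placeholders, not published results):

```python
from collections import defaultdict

def per_dimension_accuracy(records):
    """Compute accuracy per cognitive dimension.

    records: iterable of (dimension, predicted, gold) triples.
    Returns a dict mapping each dimension to its accuracy in [0, 1].
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for dim, pred, gold in records:
        total[dim] += 1
        if pred == gold:
            correct[dim] += 1
    return {dim: correct[dim] / total[dim] for dim in total}

# Placeholder records for illustration only
results = [
    ("Positional Patterns", "A", "A"),
    ("Positional Patterns", "B", "C"),
    ("Spatial Patterns", "D", "D"),
]
accuracy = per_dimension_accuracy(results)
```

The resulting per-dimension scores can then be placed side by side with the human baseline figures for comparison.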

## 🚀 Usage (GPT-4o example)
```python
import base64
import os

from openai import OpenAI  # Requires openai>=1.0.0

# Configuration: read the API key from the environment
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("Missing OPENAI_API_KEY environment variable")

# Initialize the client (official SDK approach)
client = OpenAI(api_key=api_key)

def process_image_question(image_path: str, question: str, max_tokens: int = 300) -> str:
    """Send an image and a question to the GPT-4o API and return the answer."""
    # Encode the image to base64
    with open(image_path, "rb") as image_file:
        base64_image = base64.b64encode(image_file.read()).decode("utf-8")

    # Construct the messages payload
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}",
                        "detail": "auto"  # Options: low, high, auto
                    }
                }
            ]
        }
    ]

    # Make the API request
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        max_tokens=max_tokens
    )
    return response.choices[0].message.content

# Example usage
if __name__ == "__main__":
    image_path = "path/to/your/image.jpg"  # Update with the actual path
    user_question = "What's in this image?"  # Customize your question

    try:
        answer = process_image_question(image_path, user_question)
        print("AI Response:", answer)
    except Exception as e:
        print(f"Error: {e}")
```


## 📖 Citation
If you use ReasonBench, please cite:
```bibtex
@misc{zhang2025oedipus,
  author        = {Jianyi Zhang and Xu Ji and Ziyin Zhou and Yuchen Zhou and Shubo Shi and Haoyu Wu and Zhen Li and Shizhao Liu},
  title         = {Oedipus and the Sphinx: Benchmarking and Improving Visual Language Models for Complex Graphic Reasoning},
  howpublished  = {arXiv preprint},
  archivePrefix = {arXiv},
  eprint        = {2508.00323},
  primaryClass  = {cs.AI},
  year          = {2025},
  note          = {arXiv:2508.00323v1 [cs.AI]},
  url           = {https://arxiv.org/abs/2508.00323}
}
```