tags:
- code
size_categories:
- 1K<n<10K
---

<div style="display: flex; align-items: center;">
<img src="https://huggingface.co/datasets/likaixin/MMCode/resolve/main/logo.png" alt="MMCode Logo" style="width: 50px; margin-right: 20px;"/>
<div style="display: flex; align-items: center; font-size: 40px; font-weight: bold;">MMCode (Preview v1.0)</div>
</div>

🚀 **MMCode is actively under development! If you find it useful and are interested in making it better and joining our team, please [drop an email](mailto:[email protected]). We'd love to hear from you!**

## Dataset Description
MMCode is a multi-modal code generation dataset designed to evaluate the problem-solving skills of code language models in visually rich contexts (i.e., images).
It contains 3,548 questions paired with 6,622 images, derived from real-world programming challenges on 10 code competition websites, with Python solutions and tests provided.
The dataset emphasizes the strong demand for reasoning abilities, the interwoven nature of textual and visual content, and the presence of questions containing multiple images.

## Languages
- The primary language of the dataset content is English, with some questions translated from Japanese (the raw data is provided for reference).
- The programming language of the solutions is Python.

## Dataset Structure
Each problem includes a programming question paired with relevant images, Python solutions, test cases, and other metadata.

**Data Fields:**

| Field | Type | Description |
|--------------------------|--------|-------------------------------------------------------------------------------------------------------|
| problem_id | string | A unique identifier of the problem, e.g. cf_1_A. |
| name | string | A descriptive title for the coding problem, crawled from the question. |
| source | string | The origin platform of the problem. |
| url | string | A link to the original problem on the source website. |
| question | string | The textual description of the programming challenge, outlining the problem context, objectives, and constraints. Images are represented as markdown tags (e.g. \!\[image\](1.png)). |
| raw_problem | string | The raw HTML of the problem description, which includes formatting and embedded image tags from the online version. |
| solutions | list[string] | A list of strings representing the solutions to the problem. |
| input_output | dict[string, list] | A dict containing arrays of test inputs and the expected outputs. |
| images | list[string] | An array of base64-encoded images associated with the problem. |
| picture_num | int | The number of images in the problem. |
| image_tags | list | A list categorizing the types of images associated with the problem. |
| starter_code* | string | Initial code for the problem. This can be an empty string if no starter code is provided. |
| difficulty* | string | A classification of the problem's difficulty level. |
| raw_tags* | list | Raw tags associated with the problem. |
| tags* | list | A list of strings representing categories or concepts that the problem touches upon. |
| skill_types* | list | A list of strings, similar to `tags`, that classify the problem based on the skills or knowledge areas required to solve it. |
| Expected Time Complexity* | string | The expected time complexity of the solution, if applicable. |
| Expected Auxiliary Space* | string | The auxiliary space requirement for the solution, if applicable. |
| time_limit*\^ | string | The maximum allowed execution time for the solution. |
| memory_limit*\^ | string | The maximum allowed memory usage for the solution. |

\* Fields inherited from the [TACO dataset](https://huggingface.co/datasets/BAAI/TACO).

\^ These limits are not enforced by the testing framework due to hardware discrepancies.
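In raw records, several of these fields (e.g. `solutions`, `input_output`, `raw_tags`, `tags`, `skill_types`) may appear as JSON-encoded strings rather than parsed objects, as the Data Example below shows. A minimal decoding sketch, assuming that encoding (`decode_record` and the sample record are illustrative, not part of the dataset tooling):

```python
import json

# Fields that the Data Example shows as JSON-encoded strings (an assumption
# about the on-disk encoding, based on that example).
STRINGIFIED_FIELDS = ("solutions", "input_output", "raw_tags", "tags", "skill_types")

def decode_record(record):
    """Return a copy of `record` with stringified fields parsed into objects."""
    decoded = dict(record)
    for field in STRINGIFIED_FIELDS:
        value = decoded.get(field)
        if isinstance(value, str) and value:
            decoded[field] = json.loads(value)
    return decoded

# Illustrative record in the same shape as the Data Example.
sample = {
    "problem_id": "cf_1_A",
    "solutions": "[\"print(input())\"]",
    "input_output": "{\"inputs\": [\"13 3 1\\n\"], \"outputs\": [\"3\\n\"]}",
}
rec = decode_record(sample)
```
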

## Key Features
- **Multi-Modal Challenges**: MMCode is the first code generation work to combine textual and visual information, requiring models to interpret and integrate both modalities for problem-solving.

- **Rich Diversity**: With 3,548 questions paired with 6,622 images, sourced from 10 different coding competition websites, the dataset offers a diverse range of real-world programming challenges.

- **Detailed Annotations**: The dataset includes detailed annotations for images, categorizing them into the types `Data Structure`, `Geometry`, `3D`, `Demonstration`, `Math`, `Table`, `Pseudocode`, and `Others`, which allows for a fine-grained analysis of model performance across visual information types.

- **Automatic Testing Framework**: Each problem is accompanied by input-output pairs that serve as automated test cases to rigorously evaluate the correctness of model-generated solutions.
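The input-output testing above amounts to feeding each test input to a candidate program on stdin and comparing stdout. A minimal sketch of that loop (not the actual MMCode harness; `passes_tests` is an illustrative helper, and the sample solution solves the seat problem shown in the Data Example below):

```python
import subprocess
import sys

def passes_tests(solution_code, inputs, outputs, timeout=10):
    """Run `solution_code` once per test case; compare stripped stdout to expected."""
    for stdin_data, expected in zip(inputs, outputs):
        proc = subprocess.run(
            [sys.executable, "-c", solution_code],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        if proc.returncode != 0 or proc.stdout.strip() != expected.strip():
            return False
    return True

# Sample solution to the AtCoder seat problem from the Data Example.
solution = "X, Y, Z = map(int, input().split())\nprint((X - Z) // (Y + Z))"
ok = passes_tests(solution, ["13 3 1\n", "100000 1 1\n"], ["3\n", "49999\n"])
```

A real harness would additionally sandbox execution and enforce per-problem time and memory limits.
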

## Data Split
The whole dataset is used as a test set. A `mini test set` split of 300 problems is randomly sampled from the dataset for efficient evaluation.
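Records carry a per-record `data_split` field (the Data Example below shows `"core_test"`), so selecting a split is a simple filter. A sketch (the helper and the second record's split label are illustrative; the exact label used for the mini test set is not documented here):

```python
def select_split(records, split_name):
    """Keep only records whose `data_split` field matches `split_name`."""
    return [r for r in records if r.get("data_split") == split_name]

# Illustrative records; "mini_test" is a made-up label for demonstration only.
records = [
    {"problem_id": "cf_1_A", "data_split": "core_test"},
    {"problem_id": "cf_2_B", "data_split": "mini_test"},
]
core = select_split(records, "core_test")
```
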

## Data Example
```json
{
  "solutions": "[\"(X, Y, Z) = map(int, input().split(' ')) ... \"]",
  "starter_code": "",
  "input_output": "{\"inputs\": [\"13 3 1\\n\", \"12 3 1\\n\", \"100000 1 1\\n\"], \"outputs\": [\"3\\n\", \"2\\n\", \"49999\\n\"]}",
  "difficulty": "EASY",
  "raw_tags": "[]",
  "name": "AtCoder Beginner Contest 078 - ISU",
  "source": "atcoder",
  "tags": "[]",
  "skill_types": "[]",
  "url": "https://atcoder.jp/contests/abc078/tasks/abc078_b",
  "Expected Auxiliary Space": null,
  "time_limit": "2.0 seconds",
  "memory_limit": "256.0 megabytes",
  "Expected Time Complexity": null,
  "raw_problem": "<div id=\"task-statement\">\n<span class=\"lang\">...There is just enough room for three, as shown below:</p>\n<div style=\"text-align: center;\">\n<img src=\"https://img.atcoder.jp/abc078/4a35302937c3cbc2f625156e7834d27f.png\">\n<p>Figure</p>\n</img>...</div>",
  "question": "We have a long seat of width $X$ centimeters.\r\nThere are many people who wants to sit here. A person sitting on the seat will always occupy an interval of length $Y$ centimeters.\nWe would like to seat as many people as possible, but they are all very shy, and there must be a gap of length at least $Z$ centimeters between two people, and between the end of the seat and a person.\nAt most how many people can sit on the seat?\n\nConstraints\n\n- All input values are integers.\n- $1 \\leq X, Y, Z \\leq 10^5$\n- $Y+2Z \\leq X$\n\nInput\nInput is given from Standard Input in the following format:\n$X$ $Y$ $Z$\r\n\nOutput\nPrint the answer.\n\nSample Input 1\n13 3 1\r\n\nSample Output 1\n3\r\n\nThere is just enough room for three, as shown below:\n\n![image](1.png)\nFigure\n\nSample Input 2\n12 3 1\r\n\nSample Output 2\n2\r\n\nSample Input 3\n100000 1 1\r\n\nSample Output 3\n49999\r\n\nSample Input 4\n64146 123 456\r\n\nSample Output 4\n110\r\n\nSample Input 5\n64145 123 456\r\n\nSample Output 5\n109",
  "images": "\\[<base 64 image>\\]",
  "picture_num": 1,
  "image_tags": [
    "Demonstration"
  ],
  "data_split": "core_test"
}
```
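To inspect the images of a record like the one above, each base64 string in `images` can be decoded back to PNG bytes. A minimal sketch (`save_images` is an illustrative helper, not part of the dataset tooling):

```python
import base64

def save_images(images, prefix="problem"):
    """Decode each base64 string and write it to `<prefix>_<i>.png`; return the paths."""
    paths = []
    for i, b64 in enumerate(images, start=1):
        path = f"{prefix}_{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(b64))
        paths.append(path)
    return paths
```
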

The rendered problem will look like this:

<div style="border: 2px solid #007BFF; padding: 20px; margin: 20px 0; border-radius: 5px; background-color: #f0f0f0;">

We have a long seat of width $X$ centimeters.

There are many people who wants to sit here. A person sitting on the seat will always occupy an interval of length $Y$ centimeters.

We would like to seat as many people as possible, but they are all very shy, and there must be a gap of length at least $Z$ centimeters between two people, and between the end of the seat and a person.

At most how many people can sit on the seat?

**Constraints**

- All input values are integers.
- $1 \leq X, Y, Z \leq 10^5$
- $Y+2Z \leq X$

**Input**

Input is given from Standard Input in the following format:

$X$ $Y$ $Z$

**Output**

Print the answer.

**Sample Input 1**

13 3 1

**Sample Output 1**

3

There is just enough room for three, as shown below:

![image](https://img.atcoder.jp/abc078/4a35302937c3cbc2f625156e7834d27f.png)

**Sample Input 2**

12 3 1

**Sample Output 2**

2

**Sample Input 3**

100000 1 1

**Sample Output 3**

49999

**Sample Input 4**

64146 123 456

**Sample Output 4**

110

**Sample Input 5**

64145 123 456

**Sample Output 5**

109

</div>

## Data Collection
We collected problem statements and images from 10 coding platforms. Web crawlers extracted problem details and images, which were then filtered and standardized for consistency. Images were converted to PNG and inserted into the text as markdown tags to preserve their positions. We reused metadata from the TACO dataset where possible.

Automated filtering removed questions without images or with image-loading issues, so that only high-quality, relevant data was included. Human reviewers further refined the dataset by eliminating irrelevant images, such as teasers. Finally, the images were annotated into specific categories based on their content.
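The placeholder step described above (replacing embedded `<img>` tags with numbered markdown tags so image positions survive in the text) can be sketched as a regex substitution; `replace_img_tags` is an illustrative helper, not the actual crawler code:

```python
import re

def replace_img_tags(html_text):
    """Replace each HTML <img ...> tag with a numbered markdown image placeholder."""
    counter = 0

    def repl(match):
        nonlocal counter
        counter += 1
        return f"![image]({counter}.png)"

    return re.sub(r"<img\b[^>]*>", repl, html_text)

text = replace_img_tags('See <img src="a.png"> and <img src="b.png">.')
```
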

## Data Source and License
We are sincerely grateful to the following coding platforms, and to the [TACO dataset](https://huggingface.co/datasets/BAAI/TACO).

| Website | URL | License |
|------------------|----------------------------------------|------------------------|
| Aizu | https://judge.u-aizu.ac.jp/onlinejudge | No terms or license found |
| CodeForces | https://codeforces.com | [No license found](https://codeforces.com/terms) |
| Project Euler | https://projecteuler.net | [CC BY-NC-SA 4.0](https://projecteuler.net/copyright) |
| AtCoder* | https://atcoder.jp | - |
| CodeChef* | https://www.codechef.com | - |
| CodeWars* | https://www.codewars.com | - |
| Geeksforgeeks* | https://www.geeksforgeeks.org | - |
| HackerRank* | https://www.hackerrank.com | - |
| Leetcode* | https://leetcode.com | - |
| Open Kattis* | https://open.kattis.com | - |

\* The problems and metadata in MMCode are expanded from existing data in [TACO](https://huggingface.co/datasets/BAAI/TACO), which is licensed under [Apache 2.0](https://huggingface.co/datasets/BAAI/TACO#license).

Please contact us if there are any data license issues.

## Citation
Please cite our work if you find it useful:
```
@misc{li2024MMCode,
  author = {Li, Kaixin and Tian, Yuchen and Hu, Qisheng},
  title = {MMCode: Evaluating Multi-Modal Code Language Models with Visually Rich Programming Problems},
  year = {2024},
  howpublished = {\url{https://github.com/Happylkx/MMCode}},
}
```