---
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
- ja
tags:
- code
size_categories:
- 1K<n<10K
---
<div style="display: flex; align-items: center;">
<img src="https://huggingface.co/datasets/likaixin/MMCode/resolve/main/logo.png" alt="MMCode Logo" style="width: 50px;margin-right: 20px;"/>
<div style="display: flex; align-items: center; font-size: 40px; font-weight: bold;">MMCode</div>
</div>
<div><a href="https://github.com/happylkx/MMCode">MMCode Github Repo</a> <img style="display:inline" src="https://img.shields.io/github/stars/happylkx/MMCode?style=for-the-badge" /></div>
## Dataset Description
MMCode is a multi-modal code generation dataset designed to evaluate the problem-solving skills of code language models in visually rich contexts (i.e., images).
It contains 3,548 questions paired with 6,620 images, derived from real-world programming challenges on 10 code competition websites, with Python solutions and tests provided.
The dataset is characterized by its heavy demand on reasoning abilities, the interwoven nature of its textual and visual content, and the presence of questions containing multiple images.
## Languages
- The primary language of the dataset content is English, with some questions translated from Japanese (raw data is provided for reference).
- The programming language is Python.
## Dataset Structure
Each problem includes a programming question paired with relevant images, Python solutions, test cases, and other metadata.
**Data Fields:**
| Field | Type | Description |
|--------------------------|--------|-------------------------------------------------------------------------------------------------------|
| problem_id | string | A unique identifier of the problem, e.g. cf_1_A. |
| name | string | A descriptive title for the coding problem crawled from the question. |
| source | string | The origin platform of the problem. |
| url | string | A link to the original problem on the source website. |
| question | string | The textual description of the programming challenge, outlining the problem context, objectives, and constraints. Images are represented as markdown tags (e.g. \!\[image\](1.png))|
| raw_problem | string | The raw HTML of the problem description, which includes formatting and embedded image tags for the online version. |
| solutions | list[string] | A list of strings representing the solutions to the problem. |
| input_output | dict[string, list] | A dict containing arrays of test inputs and the expected outputs.|
| images | list[string] | An array of base64 images associated with the problem. |
| picture_num | int | The number of images in the problem. |
| image_tags | list | A list categorizing the types of images associated with the problem. |
| starter_code* | string | Initial code of the problem. This could be an empty string if no starter code is provided. |
| difficulty* | string | A classification of the problem's difficulty level. |
| raw_tags* | list | Raw tags associated with the problem. |
| tags* | list | A list of strings representing various categories or concepts that the problem touches upon. |
| skill_types* | list | A list of strings, similar to `tags`, that classify the problem based on the skills or knowledge areas required to solve it. |
| Expected Time Complexity* | string | Describes the expected time complexity of the solution, if applicable. |
| Expected Auxiliary Space* | string | Information on the auxiliary space requirement for the solution, if applicable. |
| time_limit*\^ | string | The maximum allowed execution time for the solution. |
| memory_limit*\^ | string | The maximum allowed memory usage for the solution. |
\* Fields inherited from the [TACO dataset](https://huggingface.co/datasets/BAAI/TACO).
\^ These limits are not considered by the testing framework due to hardware discrepancies.
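Note that several of these fields (`solutions`, `input_output`, `raw_tags`, `tags`, `skill_types`) are stored as JSON-encoded strings, as shown in the Data Example section. Below is a minimal sketch of decoding them into Python objects; the helper name `decode_record` is our own, and you should adjust the field list if your loader already deserializes some of these.

```python
import json

def decode_record(record):
    """Decode the JSON-string fields of an MMCode record into Python objects.

    Assumes the string encodings shown in the Data Example section; skips
    fields that are missing, empty, or already deserialized.
    """
    decoded = dict(record)
    for field in ("solutions", "input_output", "raw_tags", "tags", "skill_types"):
        value = decoded.get(field)
        if isinstance(value, str) and value:
            decoded[field] = json.loads(value)
    return decoded

# Toy record mirroring the encodings in the Data Example:
record = {
    "solutions": "[\"print(1)\"]",
    "input_output": "{\"inputs\": [\"13 3 1\\n\"], \"outputs\": [\"3\\n\"]}",
    "tags": "[]",
}
decoded = decode_record(record)
print(decoded["input_output"]["outputs"])  # ['3\n']
```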
## Key Features
- **Multi-Modal Challenges**: MMCode is the first work on code generation that combines textual and visual information, requiring models to interpret and integrate both modalities to solve problems.
- **Rich Diversity**: With 3,548 questions paired with 6,620 images, sourced from 10 different coding competition websites, the dataset offers a diverse range of real-world programming challenges.
- **Detailed Annotations**: The dataset includes detailed annotations for images, categorizing them into types ``Linear Data Structure``, ``Tree``, ``Graph``, ``2D Geometry``, ``3D Geometry``, ``Chessboard``, ``Map``, ``Patterns``, ``Math``, ``Table``, ``Pseudocode``, and ``Others``, which allows for a detailed analysis of model performance across different visual information types.
- **Automatic Testing Framework**: Each problem is accompanied by input-output pairs that serve as automated test cases to rigorously evaluate the correctness of the solutions provided by the models.
## Data Split
A core test set of 263 problems (marked `core_test` in the `data_split` field) is provided for efficient evaluation.
## Data Example
```json
{
"solutions": "[\"(X, Y, Z) = map(int, input().split(' ')) ... \"]",
"starter_code": "",
"input_output": "{\"inputs\": [\"13 3 1\\n\", \"12 3 1\\n\", \"100000 1 1\\n\"], \"outputs\": [\"3\\n\", \"2\\n\", \"49999\\n\"]}",
"difficulty": "EASY",
"raw_tags": "[]",
"name": "AtCoder Beginner Contest 078 - ISU",
"source": "atcoder",
"tags": "[]",
"skill_types": "[]",
"url": "https://atcoder.jp/contests/abc078/tasks/abc078_b",
"Expected Auxiliary Space": null,
"time_limit": "2.0 seconds",
"memory_limit": "256.0 megabytes",
"Expected Time Complexity": null,
"raw_problem": "<div id=\"task-statement\">\n<span class=\"lang\">...There is just enough room for three, as shown below:</p>\n<div style=\"text-align: center;\">\n<img src=\"https://img.atcoder.jp/abc078/4a35302937c3cbc2f625156e7834d27f.png\">\n<p>Figure</p>\n</img>...</div>",
"question": "We have a long seat of width $X$ centimeters.\r\nThere are many people who wants to sit here. A person sitting on the seat will always occupy an interval of length $Y$ centimeters.\nWe would like to seat as many people as possible, but they are all very shy, and there must be a gap of length at least $Z$ centimeters between two people, and between the end of the seat and a person.\nAt most how many people can sit on the seat?\n\nConstraints\n\n- All input values are integers.\n- $1 \\leq X, Y, Z \\leq 10^5$\n- $Y+2Z \\leq X$\n\nInput\nInput is given from Standard Input in the following format:\n$X$ $Y$ $Z$\r\n\nOutput\nPrint the answer.\n\nSample Input 1\n13 3 1\r\n\nSample Output 1\n3\r\n\nThere is just enough room for three, as shown below:\n\n![image](1.png)\nFigure\n\nSample Input 2\n12 3 1\r\n\nSample Output 2\n2\r\n\nSample Input 3\n100000 1 1\r\n\nSample Output 3\n49999\r\n\nSample Input 4\n64146 123 456\r\n\nSample Output 4\n110\r\n\nSample Input 5\n64145 123 456\r\n\nSample Output 5\n109",
"images": "\\[<base 64 image>\\]",
"picture_num": 1,
"image_tags": [
"Demonstration"
],
"data_split": "core_test"
}
```
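The `input_output` pairs above can drive an automated evaluation loop. The following is a minimal sketch of such a harness, not the official MMCode testing framework: it runs a candidate program in a subprocess per test case and compares trimmed stdout, ignoring the `time_limit` and `memory_limit` fields. The candidate solution for the seat problem is our own, written for illustration.

```python
import json
import subprocess
import sys

def run_tests(solution_code, input_output_json):
    """Run a candidate Python solution against MMCode-style input/output pairs.

    Returns (number of passed cases, total cases). A simplified sketch:
    the real framework also handles timeouts and resource limits.
    """
    io = json.loads(input_output_json)
    passed = 0
    for stdin_data, expected in zip(io["inputs"], io["outputs"]):
        proc = subprocess.run(
            [sys.executable, "-c", solution_code],
            input=stdin_data, capture_output=True, text=True, timeout=10,
        )
        if proc.returncode == 0 and proc.stdout.strip() == expected.strip():
            passed += 1
    return passed, len(io["inputs"])

# Our own candidate solution for the seat problem shown above:
solution = "X, Y, Z = map(int, input().split()); print((X - Z) // (Y + Z))"
io_spec = ('{"inputs": ["13 3 1\\n", "12 3 1\\n", "100000 1 1\\n"],'
           ' "outputs": ["3\\n", "2\\n", "49999\\n"]}')
print(run_tests(solution, io_spec))  # (3, 3)
```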
The rendered problem will look like this:
<div style="border: 2px solid #007BFF; padding: 20px; margin: 20px 0; border-radius: 5px; background-color: #f0f0f0;">
We have a long seat of width $X$ centimeters.
There are many people who wants to sit here. A person sitting on the seat will always occupy an interval of length $Y$ centimeters.
We would like to seat as many people as possible, but they are all very shy, and there must be a gap of length at least $Z$ centimeters between two people, and between the end of the seat and a person.
At most how many people can sit on the seat?
Constraints
- All input values are integers.
- $1 \leq X, Y, Z \leq 10^5$
- $Y+2Z \leq X$
**Input**
Input is given from Standard Input in the following format:
$X$ $Y$ $Z$
**Output**
Print the answer.
**Sample Input 1**
13 3 1
**Sample Output 1**
3
There is just enough room for three, as shown below:
![image](https://img.atcoder.jp/abc078/4a35302937c3cbc2f625156e7834d27f.png)
**Sample Input 2**
12 3 1
**Sample Output 2**
2
**Sample Input 3**
100000 1 1
**Sample Output 3**
49999
**Sample Input 4**
64146 123 456
**Sample Output 4**
110
**Sample Input 5**
64145 123 456
**Sample Output 5**
109
</div>
## Data Collection
We collected statements and images from 10 coding platforms. Web crawlers extracted problem details and images, which were then filtered and standardized for consistency. Images were converted to PNG and inserted into the text using markdown tags to preserve their position. We reused metadata from the TACO dataset when possible.
Automated filtering removed questions without images or with loading issues, ensuring only high-quality, relevant data was included. Human reviewers further refined the dataset by eliminating irrelevant images, such as teasers. The final step involved annotating images into specific categories based on their content.
## Data Source and License
We are sincerely grateful to the following coding platforms and to the [TACO dataset](https://huggingface.co/datasets/BAAI/TACO).
| Website | URL | License |
|------------------|----------------------------------------|------------------------|
| Aizu | https://judge.u-aizu.ac.jp/onlinejudge | No terms or license found |
| CodeForces | https://codeforces.com | [No license found](https://codeforces.com/terms) |
| Project Euler | https://projecteuler.net | [CC BY-NC-SA 4.0](https://projecteuler.net/copyright) |
| AtCoder* | https://atcoder.jp | - |
| CodeChef* | https://www.codechef.com | - |
| CodeWars* | https://www.codewars.com | - |
| Geeksforgeeks* | https://www.geeksforgeeks.org | - |
| HackerRank* | https://www.hackerrank.com | - |
| Leetcode* | https://leetcode.com | - |
| Open Kattis* | https://open.kattis.com | - |
\* The problems and metadata for these platforms in MMCode are expanded from existing data in [TACO](https://huggingface.co/datasets/BAAI/TACO), which is licensed under [Apache 2.0](https://huggingface.co/datasets/BAAI/TACO#license).
Please contact us if there are data license issues.
## Citation
Please cite our work if you find it useful:
```
@misc{li2024mmcode,
title={MMCode: Evaluating Multi-Modal Code Large Language Models with Visually Rich Programming Problems},
author={Kaixin Li and Yuchen Tian and Qisheng Hu and Ziyang Luo and Jing Ma},
year={2024},
eprint={2404.09486},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```