nlile and parquet-converter (BOT) committed 2d7dd60 (verified, 0 parents): Duplicate from google-research-datasets/mbpp

Co-authored-by: Parquet-converter (BOT) <[email protected]>
.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
pretty_name: Mostly Basic Python Problems
tags:
- code-generation
dataset_info:
- config_name: full
  features:
  - name: task_id
    dtype: int32
  - name: text
    dtype: string
  - name: code
    dtype: string
  - name: test_list
    sequence: string
  - name: test_setup_code
    dtype: string
  - name: challenge_test_list
    sequence: string
  splits:
  - name: train
    num_bytes: 176879
    num_examples: 374
  - name: test
    num_bytes: 244104
    num_examples: 500
  - name: validation
    num_bytes: 42405
    num_examples: 90
  - name: prompt
    num_bytes: 4550
    num_examples: 10
  download_size: 236069
  dataset_size: 467938
- config_name: sanitized
  features:
  - name: source_file
    dtype: string
  - name: task_id
    dtype: int32
  - name: prompt
    dtype: string
  - name: code
    dtype: string
  - name: test_imports
    sequence: string
  - name: test_list
    sequence: string
  splits:
  - name: train
    num_bytes: 63453
    num_examples: 120
  - name: test
    num_bytes: 132720
    num_examples: 257
  - name: validation
    num_bytes: 20050
    num_examples: 43
  - name: prompt
    num_bytes: 3407
    num_examples: 7
  download_size: 115422
  dataset_size: 219630
configs:
- config_name: full
  data_files:
  - split: train
    path: full/train-*
  - split: test
    path: full/test-*
  - split: validation
    path: full/validation-*
  - split: prompt
    path: full/prompt-*
  default: true
- config_name: sanitized
  data_files:
  - split: train
    path: sanitized/train-*
  - split: test
    path: sanitized/test-*
  - split: validation
    path: sanitized/validation-*
  - split: prompt
    path: sanitized/prompt-*
---

# Dataset Card for Mostly Basic Python Problems (mbpp)

## Table of Contents
- [Dataset Card for Mostly Basic Python Problems (mbpp)](#dataset-card-for-mostly-basic-python-problems-mbpp)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description
- **Repository:** https://github.com/google-research/google-research/tree/master/mbpp
- **Paper:** [Program Synthesis with Large Language Models](https://arxiv.org/abs/2108.07732)

### Dataset Summary
The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, a code solution, and 3 automated test cases. As described in the paper, a subset of the data has been hand-verified by the authors.

Released [here](https://github.com/google-research/google-research/tree/master/mbpp) as part of [Program Synthesis with Large Language Models, Austin et al., 2021](https://arxiv.org/abs/2108.07732).

### Supported Tasks and Leaderboards
This dataset is used to evaluate code generation models.

### Languages
English; the code is Python.

## Dataset Structure

```python
from datasets import load_dataset

dataset_full = load_dataset("mbpp")
DatasetDict({
    test: Dataset({
        features: ['task_id', 'text', 'code', 'test_list', 'test_setup_code', 'challenge_test_list'],
        num_rows: 974
    })
})

dataset_sanitized = load_dataset("mbpp", "sanitized")
DatasetDict({
    test: Dataset({
        features: ['source_file', 'task_id', 'prompt', 'code', 'test_imports', 'test_list'],
        num_rows: 427
    })
})
```

### Data Instances

#### mbpp - full
```
{
    'task_id': 1,
    'text': 'Write a function to find the minimum cost path to reach (m, n) from (0, 0) for the given cost matrix cost[][] and a position (m, n) in cost[][].',
    'code': 'R = 3\r\nC = 3\r\ndef min_cost(cost, m, n): \r\n\ttc = [[0 for x in range(C)] for x in range(R)] \r\n\ttc[0][0] = cost[0][0] \r\n\tfor i in range(1, m+1): \r\n\t\ttc[i][0] = tc[i-1][0] + cost[i][0] \r\n\tfor j in range(1, n+1): \r\n\t\ttc[0][j] = tc[0][j-1] + cost[0][j] \r\n\tfor i in range(1, m+1): \r\n\t\tfor j in range(1, n+1): \r\n\t\t\ttc[i][j] = min(tc[i-1][j-1], tc[i-1][j], tc[i][j-1]) + cost[i][j] \r\n\treturn tc[m][n]',
    'test_list': [
        'assert min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2) == 8',
        'assert min_cost([[2, 3, 4], [5, 9, 3], [2, 6, 4]], 2, 2) == 12',
        'assert min_cost([[3, 4, 5], [6, 10, 4], [3, 7, 5]], 2, 2) == 16'],
    'test_setup_code': '',
    'challenge_test_list': []
}
```

#### mbpp - sanitized
```
{
    'source_file': 'Benchmark Questions Verification V2.ipynb',
    'task_id': 2,
    'prompt': 'Write a function to find the shared elements from the given two lists.',
    'code': 'def similar_elements(test_tup1, test_tup2):\n  res = tuple(set(test_tup1) & set(test_tup2))\n  return (res) ',
    'test_imports': [],
    'test_list': [
        'assert set(similar_elements((3, 4, 5, 6),(5, 7, 4, 10))) == set((4, 5))',
        'assert set(similar_elements((1, 2, 3, 4),(5, 4, 3, 7))) == set((3, 4))',
        'assert set(similar_elements((11, 12, 14, 13),(17, 15, 14, 13))) == set((13, 14))'
    ]
}
```
### Data Fields

- `source_file`: unknown
- `text`/`prompt`: description of the programming task
- `code`: solution for the programming task
- `test_setup_code`/`test_imports`: code imports necessary to execute the tests
- `test_list`: list of tests to verify the solution
- `challenge_test_list`: list of more challenging tests to further probe the solution

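A sample's fields carry everything needed to check a solution against its tests. As a minimal sketch, the `passes_tests` helper below is hypothetical (not part of the dataset or its tooling): it replays a sample's reference `code` against its own `test_list` via `exec`, using the sanitized example shown above.

```python
# Sample copied from the "mbpp - sanitized" data instance above.
sample = {
    "task_id": 2,
    "code": "def similar_elements(test_tup1, test_tup2):\n  res = tuple(set(test_tup1) & set(test_tup2))\n  return (res) ",
    "test_imports": [],
    "test_list": [
        "assert set(similar_elements((3, 4, 5, 6),(5, 7, 4, 10))) == set((4, 5))",
        "assert set(similar_elements((1, 2, 3, 4),(5, 4, 3, 7))) == set((3, 4))",
        "assert set(similar_elements((11, 12, 14, 13),(17, 15, 14, 13))) == set((13, 14))",
    ],
}

def passes_tests(sample):
    """Hypothetical helper: run a sample's code against its own tests."""
    env = {}
    for imp in sample.get("test_imports", []):
        exec(imp, env)              # imports required by the tests
    exec(sample["code"], env)       # define the reference solution
    try:
        for assertion in sample["test_list"]:
            exec(assertion, env)    # raises AssertionError on failure
    except AssertionError:
        return False
    return True

print(passes_tests(sample))  # True
```

The same loop works for model-generated code by substituting the generated string for `sample["code"]` (see the safety note under Considerations below before running untrusted code this way).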
### Data Splits
There are two versions of the dataset (`full` and `sanitized`), each with four splits:
- train
- validation
- test
- prompt

The `prompt` split corresponds to samples used for few-shot prompting, not for training.

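To illustrate how the `prompt` split is typically consumed, here is a sketch of assembling a k-shot prompt. The rows and the `Task:`/`Solution:` template below are illustrative assumptions, not a format prescribed by the dataset.

```python
# Stand-ins for rows of the `prompt` split (full config rows have
# `text`, `code`, `test_list`, ...); real usage would load them with
# load_dataset("mbpp", split="prompt").
fewshot_rows = [
    {"text": "Write a function to add two numbers.",
     "code": "def add(a, b):\n    return a + b"},
    {"text": "Write a function to square a number.",
     "code": "def square(x):\n    return x * x"},
]

def build_prompt(fewshot_rows, task_description):
    """Concatenate solved examples, then the unsolved target task."""
    parts = []
    for row in fewshot_rows:
        parts.append(f"Task: {row['text']}\nSolution:\n{row['code']}\n")
    parts.append(f"Task: {task_description}\nSolution:\n")
    return "\n".join(parts)

prompt = build_prompt(fewshot_rows, "Write a function to reverse a string.")
print(prompt.count("Task:"))  # 3: two examples plus the target task
```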
## Dataset Creation
See section 2.1 of the original [paper](https://arxiv.org/abs/2108.07732).

### Curation Rationale
Evaluating code generation models requires a set of simple programming tasks together with reference solutions, which this dataset provides.

### Source Data

#### Initial Data Collection and Normalization
The dataset was manually created from scratch.

#### Who are the source language producers?
The dataset was created through an internal crowdsourcing effort at Google.

### Annotations

#### Annotation process
The full dataset was created first, and a subset then underwent a second round of annotation to improve the task descriptions.

#### Who are the annotators?
The dataset was created through an internal crowdsourcing effort at Google.

### Personal and Sensitive Information
None.

## Considerations for Using the Data
Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.

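As a minimal sketch of the safety advice above, the hypothetical `run_untrusted` helper below isolates generated code in a separate process with a timeout. This is only a starting point; serious evaluations should add stronger sandboxing (containers, seccomp, resource limits, no network).

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> bool:
    """Run untrusted code in a child process; True iff it exits cleanly."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,   # don't leak child output into ours
            timeout=timeout,       # kill runaway code (e.g. infinite loops)
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.remove(path)

print(run_untrusted("assert 1 + 1 == 2"))            # True
print(run_untrusted("while True: pass", timeout=1))  # False (timed out)
```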
### Social Impact of Dataset
With this dataset, code-generating models can be evaluated more thoroughly, which leads to fewer issues being introduced when such models are used.

### Discussion of Biases

### Other Known Limitations
The task descriptions might not be expressive enough to fully specify each task. The `sanitized` configuration aims to address this by having a second round of annotators improve the task descriptions.

## Additional Information

### Dataset Curators
Google Research

### Licensing Information
CC-BY-4.0

### Citation Information
```
@article{austin2021program,
  title={Program Synthesis with Large Language Models},
  author={Austin, Jacob and Odena, Augustus and Nye, Maxwell and Bosma, Maarten and Michalewski, Henryk and Dohan, David and Jiang, Ellen and Cai, Carrie and Terry, Michael and Le, Quoc and others},
  journal={arXiv preprint arXiv:2108.07732},
  year={2021}
}
```

### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
full/prompt-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a053e4bb85ceb77430ae80592addb4ca4dc6ba087592f9e04537800ee88b7431
size 7878
full/test-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:566fd53060ffba5766dace1d1e2f4c38906781526de222b0dfbdbc325b696c77
size 115824
full/train-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:09d125ca31edacb7800be8c67c45abff618faf0214ff551291817d06bdb914ae
size 87223
full/validation-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:3f0ec060987432d99fe8fb409d31e6c67445b208a01741c5583517c80a10fe80
size 25144
sanitized/prompt-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:73c623309b7b5d65fd5661204b35f779f8e66301aa9832d1ad4b8fc3b21151fd
size 6717
sanitized/test-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e9e9efa2c0d59ef5e55537a9d126b8f875d5ac010a8d75628d76824884e15850
size 60864
sanitized/train-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d95f8ad6d2fff08fe4826122d6e3e31f75716825d0c5c340d297aca5e9e0de0e
size 33854
sanitized/validation-00000-of-00001.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:27e065fcab3c863959933328a7fdbf404e1bcb5464b1be6fe0dcd9530e420204
size 13987