---
license: apache-2.0
dataset_info:
  features:
  - name: data_path
    sequence: string
  - name: generator
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: options
    sequence: string
  - name: metadata
    dtype: string
  splits:
  - name: dcs_sa
    num_bytes: 1192380951
    num_examples: 2294572
  - name: dcs_mc
    num_bytes: 1313184418
    num_examples: 2294572
  - name: dcm_sa_2_img
    num_bytes: 858402949
    num_examples: 1400000
  - name: dcm_mc_2_img
    num_bytes: 931128693
    num_examples: 1400000
  - name: dcm_sa_3_img
    num_bytes: 1167523949
    num_examples: 1400000
  - name: dcm_mc_3_img
    num_bytes: 1297530106
    num_examples: 1400000
  - name: dcm_sa_4_img
    num_bytes: 1435043372
    num_examples: 1400000
  - name: dcm_mc_4_img
    num_bytes: 1596677323
    num_examples: 1400000
  - name: vgs_sa
    num_bytes: 595577425
    num_examples: 1537630
  - name: vgs_mc
    num_bytes: 671343503
    num_examples: 1537630
  - name: vgm_sa_2_img
    num_bytes: 536078137
    num_examples: 1400000
  - name: vgm_mc_2_img
    num_bytes: 612895409
    num_examples: 1400000
  - name: vgm_sa_3_img
    num_bytes: 693450488
    num_examples: 1400000
  - name: vgm_mc_3_img
    num_bytes: 830159021
    num_examples: 1400000
  - name: vgm_sa_4_img
    num_bytes: 802710456
    num_examples: 1400000
  - name: vgm_mc_4_img
    num_bytes: 972149375
    num_examples: 1400000
  download_size: 5904415104
  dataset_size: 15506235575
configs:
- config_name: default
  data_files:
  - split: dcs_sa
    path: data/dcs_sa-*
  - split: dcs_mc
    path: data/dcs_mc-*
  - split: dcm_sa_2_img
    path: data/dcm_sa_2_img-*
  - split: dcm_mc_2_img
    path: data/dcm_mc_2_img-*
  - split: dcm_sa_3_img
    path: data/dcm_sa_3_img-*
  - split: dcm_mc_3_img
    path: data/dcm_mc_3_img-*
  - split: dcm_sa_4_img
    path: data/dcm_sa_4_img-*
  - split: dcm_mc_4_img
    path: data/dcm_mc_4_img-*
  - split: vgs_sa
    path: data/vgs_sa-*
  - split: vgs_mc
    path: data/vgs_mc-*
  - split: vgm_sa_2_img
    path: data/vgm_sa_2_img-*
  - split: vgm_mc_2_img
    path: data/vgm_mc_2_img-*
  - split: vgm_sa_3_img
    path: data/vgm_sa_3_img-*
  - split: vgm_mc_3_img
    path: data/vgm_mc_3_img-*
  - split: vgm_sa_4_img
    path: data/vgm_sa_4_img-*
  - split: vgm_mc_4_img
    path: data/vgm_mc_4_img-*
---

<h1 align="center">
ProVision: Programmatically Scaling Vision-centric Instruction Data for Multimodal Language Models
</h1>

ProVision is an extendable data generation engine that produces instruction data for large multimodal language models (MLMs).

In particular, it synthesizes instruction data via data generators (Python programs) and scene graphs rather than proprietary models. It also includes a scene graph generation pipeline built from various state-of-the-art models (e.g., an object detection model). Thus, one can generate instruction data for any given image by first generating its scene graph and then applying the data generators, as sketched below.

ProVision supports generation of both single-image and multi-image instruction data. One can also extend the engine by adding new data generators.
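
To make the idea concrete, here is a minimal sketch of a data generator applied to a scene graph. The scene-graph schema and the `attribute_generator` function below are illustrative assumptions for this card, not the actual ProVision API; see the repository for the real generators.

```python
import random

# Illustrative scene graph (hypothetical schema): objects with names,
# attributes, and relations. Real ProVision scene graphs may differ.
scene_graph = {
    "objects": {
        "obj1": {"name": "dog", "attributes": ["brown"], "relations": [("left of", "obj2")]},
        "obj2": {"name": "ball", "attributes": ["red"], "relations": []},
    }
}

def attribute_generator(graph: dict) -> dict:
    """Ask about an attribute of a randomly chosen object in the scene graph."""
    candidates = [obj for obj in graph["objects"].values() if obj["attributes"]]
    if not candidates:
        return {}
    obj = random.choice(candidates)
    return {
        "question": f"What attribute best describes the {obj['name']}?",
        "answer": random.choice(obj["attributes"]),
        "generator": "attribute_generator",
    }

print(attribute_generator(scene_graph))
# e.g. {'question': 'What attribute best describes the dog?', 'answer': 'brown', ...}
```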

**You are currently viewing the ProVision-10M dataset.**

![pipeline](pipeline.png)

## Dataset Details

### Dataset Sources

- **Repository:** https://github.com/JieyuZ2/ProVision
- **Paper:**
- **Blog:**
- **Source Data:** [Visual Genome](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html)/[GQA](https://cs.stanford.edu/people/dorarad/gqa/about.html) and [DataComp](https://www.datacomp.ai/dcclip/index.html#home)

## Uses

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

ProVision-10M is designed to facilitate research on training multimodal language models.
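
The splits can be loaded with the Hugging Face `datasets` library. A minimal sketch follows; the repository identifier is a placeholder, so substitute this dataset's actual Hub ID.

```python
from datasets import load_dataset

# Placeholder repo ID; replace with this dataset's actual Hub identifier.
ds = load_dataset("<org>/ProVision-10M", split="vgs_sa")

example = ds[0]
# Fields follow the schema in the card header:
# data_path (list of image paths), generator, question, answer, options, metadata
print(example["question"], example["answer"])
```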

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

ProVision-10M was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people's faces and other sensitive content), as well as using it for military applications, are inappropriate uses of ProVision-10M.

## Dataset Creation

### Curation Rationale

ProVision-10M was created to demonstrate the potential of programmatically synthesizing instruction data for training multimodal language models.

### Source Data

The dataset is built upon two data sources:

- we use 74,289 images and scene graphs from Visual Genome (the GQA version)
- we use 126,106 images from DataComp

### Dataset Summary

**We do not release the images; please download them from their original sources (GQA/DataComp).**

| Split | Examples | Format | Description |
| :----------- | :-------- | :-------------- | :----------------------------------------------------- |
| vgs_sa | 1,537,630 | short answer | single-image instruction data based on Visual Genome |
| vgs_mc | 1,537,630 | multiple choice | single-image instruction data based on Visual Genome |
| vgm_sa_2_img | 1,400,000 | short answer | 2-image instruction data based on Visual Genome |
| vgm_mc_2_img | 1,400,000 | multiple choice | 2-image instruction data based on Visual Genome |
| vgm_sa_3_img | 1,400,000 | short answer | 3-image instruction data based on Visual Genome |
| vgm_mc_3_img | 1,400,000 | multiple choice | 3-image instruction data based on Visual Genome |
| vgm_sa_4_img | 1,400,000 | short answer | 4-image instruction data based on Visual Genome |
| vgm_mc_4_img | 1,400,000 | multiple choice | 4-image instruction data based on Visual Genome |
| dcs_sa | 2,294,572 | short answer | single-image instruction data based on DataComp images |
| dcs_mc | 2,294,572 | multiple choice | single-image instruction data based on DataComp images |
| dcm_sa_2_img | 1,400,000 | short answer | 2-image instruction data based on DataComp images |
| dcm_mc_2_img | 1,400,000 | multiple choice | 2-image instruction data based on DataComp images |
| dcm_sa_3_img | 1,400,000 | short answer | 3-image instruction data based on DataComp images |
| dcm_mc_3_img | 1,400,000 | multiple choice | 3-image instruction data based on DataComp images |
| dcm_sa_4_img | 1,400,000 | short answer | 4-image instruction data based on DataComp images |
| dcm_mc_4_img | 1,400,000 | multiple choice | 4-image instruction data based on DataComp images |
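
Because only the annotations are distributed, each example's `data_path` must be resolved against images you download yourself. A minimal sketch follows, assuming `data_path` stores paths relative to a local image root; the directory layout is an illustrative assumption.

```python
from pathlib import Path
from PIL import Image

IMAGE_ROOT = Path("./images")  # wherever you placed the GQA / DataComp images

def load_images(example: dict) -> list[Image.Image]:
    """Resolve each `data_path` entry against the local image root."""
    images = []
    for rel_path in example["data_path"]:
        full_path = IMAGE_ROOT / rel_path
        if not full_path.exists():
            raise FileNotFoundError(f"Download the source image first: {full_path}")
        images.append(Image.open(full_path))
    return images
```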

## License

We release ProVision-10M under the Apache License 2.0.

## Citation

```

```