Datasets · Modalities: Image, Text · Formats: json · Languages: English · Libraries: Datasets, Dask

Queen-Vermouth committed c6b93b6 (parent: 743a703)

Update README.md

Files changed (1): README.md (+247 −77)
@@ -10,137 +10,307 @@ language:
 
  This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
 
- ## Dataset Summary
- >The Artwork Explanation Generation Dataset is a novel resource designed to advance the field of large-scale vision-language models (LVLMs) by focusing on the intersection of art and artificial intelligence.
- >This dataset aims to challenge and enhance LVLMs' abilities in generating detailed, knowledgeable explanations of artworks, leveraging both visual and textual cues.
- >Created using a comprehensive collection of artwork articles from English Wikipedia, this dataset facilitates a unique task: generating coherent and informative descriptions of artworks from images and titles, or from images alone.
-
- ### Dataset Description
-
- <!-- Provide a longer summary of what this dataset is. -->
-
-
 
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
 
- ### Dataset Sources [optional]
 
- <!-- Provide the basic links for the dataset. -->
 
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
- ## Uses
 
- <!-- Address questions around how the dataset is intended to be used. -->
 
- ### Direct Use
 
- <!-- This section describes suitable use cases for the dataset. -->
 
  [More Information Needed]
 
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
 
  ## Dataset Structure
-
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]
 
  ## Dataset Creation
 
  ### Curation Rationale
 
- <!-- Motivation for the creation of this dataset. -->
 
- [More Information Needed]
 
  ### Source Data
 
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
 
- #### Data Collection and Processing
 
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
 
- [More Information Needed]
 
- #### Who are the source data producers?
 
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
 
- [More Information Needed]
 
- ### Annotations [optional]
 
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
 
- #### Annotation process
 
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
 
  [More Information Needed]
 
- #### Who are the annotators?
 
- <!-- This section describes the people or systems who created the annotations. -->
 
- [More Information Needed]
 
- #### Personal and Sensitive Information
 
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
 
- [More Information Needed]
 
- ## Bias, Risks, and Limitations
 
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
  [More Information Needed]
 
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
 
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
 
  [More Information Needed]
 
- ## Glossary [optional]
 
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
 
- [More Information Needed]
 
- ## More Information [optional]
 
- [More Information Needed]
 
- ## Dataset Card Authors [optional]
 
  [More Information Needed]
 
- ## Dataset Card Contact
-
- [More Information Needed]
+ # Dataset Card for "Wiki-ImageReview1.0"
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** https://github.com/naist-nlp/Hackathon-2023-Summer
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
 
 
 
+ ## Dataset Summary
+ >Explain Artworks: ExpArt is designed to enhance the capabilities of large-scale vision-language models (LVLMs) in analyzing and describing artworks.
+ >Drawing from a comprehensive array of English Wikipedia art articles, the dataset encourages LVLMs to create in-depth descriptions based on images, with or without accompanying titles.
+ >This endeavor aims to improve LVLMs' proficiency in discerning and articulating the historical and thematic nuances of art. ExpArt not only aims to elevate AI's understanding and critique of art but also seeks to forge a stronger connection between artificial intelligence and art history.
+ >With approximately 10,000 articles, the dataset introduces specialized metrics for assessing the effectiveness of LVLMs in art explanation, focusing on their interpretation of visual and textual cues.
+
+ ### Supported Tasks and Leaderboards
 
  [More Information Needed]
 
+ ### Languages
+ This dataset is available in English.
 
  ## Dataset Structure
+ The structure of the raw dataset is as follows:
+
+ ```json
+ {
+   "id": "0001_T",
+   "title": "Mona Lisa",
+   "conversations": [
+     {
+       "from": "user",
+       "value": "<img>/images/Mona Lisa.jpg</img>\nFocus on Mona Lisa and explore the history."
+     },
+     {
+       "from": "assistant",
+       "value": "Of Leonardo da Vinci’s works, the Mona Lisa is the only portrait whose authenticity...."
+     }
+   ]
+ }
+ ```
+ ```json
+ {
+   "id": "0001_NT",
+   "conversations": [
+     {
+       "from": "user",
+       "value": "<img>/images/Mona Lisa.jpg</img>\nFocus on this artwork and explore the history."
+     },
+     {
+       "from": "assistant",
+       "value": "Of Leonardo da Vinci’s works, the Mona Lisa is the only portrait whose authenticity...."
+     }
+   ]
+ }
+ ```
+
+
+ ### Data Instances
+ To load the dataset, you must specify a language.
+
+ #### English Example
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("naist-nlp/Wiki-ImageReview1.0", 'en')
+
+ print(dataset)
+ # DatasetDict({
+ #     train: Dataset({
+ #         features: ['id', 'image', 'image_url', 'genre', 'sentence_1', 'sentence_2', 'sentence_3', 'sentence_4', 'sentence_5', 'annotator_1', 'annotator_2', 'annotator_3', 'best_pair', 'best_pair_rho'],
+ #         num_rows: 207
+ #     })
+ # })
+ ```
+
+ #### Japanese Example
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("naist-nlp/Wiki-ImageReview1.0", 'ja')
+ ```
+
+ An example from the English dataset is as follows:
+
+ ```json
+ {
+   "id": "001",
+   "image": <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=242x300 at 0x7F9D26ED47C0>,
+   "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Ardea_picata.jpg/242px-Ardea_picata.jpg",
+   "genre": "Animals",
+   "sentence_1": "This photograph captures the...",
+   "sentence_2": "The photographer has done...",
+   "sentence_3": "While the clarity of the image is...",
+   "sentence_4": "I believe the image fails to...",
+   "sentence_5": "The photograph stunningly showcases...",
+   "annotator_1": [1, 3, 4, 5, 2],
+   "annotator_2": [3, 1, 4, 5, 2],
+   "annotator_3": [1, 2, 3, 4, 5],
+   "best_pair": ["annotator_1", "annotator_3"],
+   "best_pair_rho": 0.4000000059604645
+ }
+ ```
+
+
+ ### Data Fields
+
+ - id: Unique ID for each pair of an image and its review.
+ - image: The image itself.
+ - image_url: URL from which the image was retrieved.
+ - genre: The genre to which the image belongs.
+ - sentence_[1-5]: Review sentences generated by GPT-4V, rated from 1 (best) to 5 (worst) as a review.
+ - annotator_[1-3]: Rankings of the review sentences, from best to worst, by annotators 1 to 3.
+ - best_pair: [More Information Needed]
+ - best_pair_rho: [More Information Needed]
+
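The two undocumented fields can be read against the creation notes below: annotator pairs are compared by rank correlation, so a plausible interpretation (an assumption, not confirmed by this card) is that `best_pair_rho` is the Spearman correlation between the two annotators listed in `best_pair`. A minimal sketch that reproduces the value from the example instance above:

```python
def spearman_rho(rank_a, rank_b):
    # Spearman rank correlation for two tie-free rankings
    # (matches scipy.stats.spearmanr in the no-ties case).
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Rankings taken from the example instance above; its best_pair is
# ["annotator_1", "annotator_3"].
annotator_1 = [1, 3, 4, 5, 2]
annotator_3 = [1, 2, 3, 4, 5]

print(spearman_rho(annotator_1, annotator_3))  # 0.4, matching best_pair_rho
```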
+ ### Data Splits
+
+ | Language | Language code | Size |
+ | :------- | :------------ | ---: |
+ | English  | en            | 207  |
+ | Japanese | ja            | 207  |
 
 
  ## Dataset Creation
 
+ > Our dataset construction process consists of the following four steps:
+ > (1) collecting images, (2) generating five review texts, (3) ranking the review texts manually, and (4) filtering low-quality data.
+
  ### Curation Rationale
 
 
  ### Source Data
+ - #### Source of Image
+ >The images are collected from the "Featured pictures" section of English Wikipedia.
+ >This section is composed of images, such as photographs, illustrations, and diagrams, selected by user votes.
+ >The image data contained in this section is of very high quality and covers a diverse range of genres, including artwork, natural landscapes, historical events, and science.
+ >We therefore select it as the image source.
 
+ ><details><summary>Genre (number of images)</summary>
+ >
+ >```
+ >Animals (15) / Artwork (15) / Culture, entertainment, and lifestyle (15) /
+ >Currency (15) / Diagrams, drawings, and maps (15) /
+ >Engineering and technology (15) / History (15) / Natural phenomena (15) /
+ >People (15) / Places (15) / Plants (15) / Sciences (15) / Space (15) /
+ >Vehicles (15) / Other lifeforms (15) / Other (15)
+ >```
+ ></details>
 
+ - #### Source of Review
+ > Five review texts are generated for each image by GPT-4V, in English and Japanese.
 
 
 
+ #### Initial Data Collection and Normalization
 
+ - #### Generation Prompt
+ >We formulate a prompt specifically designed to underscore distinctions.
+ >This prompt is tailored to generate five distinct review texts, each uniquely characterized by its degree of reasonableness and objectivity.
 
+ >Prompt:
+ >Please describe five different review texts about the good points and room for improvement of the image, following the constraints below:
+ >1. Each review text should have different content.
+ >2. The length of each review text should be almost the same.
+ >3. Do not include bullet points within the review texts.
+ >4. The review texts should be described in the following order: "Objective and reasonable," "Subjective but reasonable," "Objective but unreasonable," "Subjective and unreasonable," and "Subjective and containing an error".
+ >5. Each review text should describe both the good points and room for improvement of the image.
+ >6. If the image has no room for improvement, explicitly state that within the review text.
 
 
+ - #### Removing contradictory expressions
+ >A generated text sometimes ends with a contradictory expression that negates itself, such as "Note: the review contains an error as the stars are not blurred in the image provided."
+ >We check for these phrases and remove them manually.
 
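The check above is manual. As a hypothetical pre-screening aid (not part of the documented pipeline), trailing self-negating notes of this shape could be flagged automatically before the manual pass:

```python
import re

# Hypothetical helper: flag review texts ending with a self-negating
# "Note: ..." disclaimer so they can be checked and trimmed by hand.
NOTE_PATTERN = re.compile(r'\bNote:\s[^.]*\.?\s*$')

def flag_contradictory(text: str) -> bool:
    return bool(NOTE_PATTERN.search(text))

reviews = [
    "The stars are beautifully blurred. Note: the review contains an error "
    "as the stars are not blurred in the image provided.",
    "The composition balances foreground and background well.",
]
print([flag_contradictory(t) for t in reviews])  # [True, False]
```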
 
+ - #### Ranking review texts manually
+ >The five review texts of each image are manually ranked by X (≥ 3) annotators.
 
+ - #### Filtering low-quality data
+ >We measure rank correlations among annotators and filter by setting a threshold on the rank correlation of the pair of annotators with the highest correlation.
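A sketch of this filtering step, assuming Spearman correlation and a made-up threshold value (the card does not state the actual coefficient or threshold):

```python
from itertools import combinations

def spearman_rho(rank_a, rank_b):
    # Spearman correlation for tie-free rankings (closed form).
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def best_annotator_pair(rankings):
    # Return the most-correlated pair of annotators and their rho.
    return max(
        (((na, nb), spearman_rho(ra, rb))
         for (na, ra), (nb, rb) in combinations(rankings.items(), 2)),
        key=lambda pair_rho: pair_rho[1],
    )

THRESHOLD = 0.3  # hypothetical; the actual threshold is undocumented

# Made-up rankings for illustration.
rankings = {
    "annotator_1": [1, 2, 3, 4, 5],
    "annotator_2": [2, 1, 3, 5, 4],
    "annotator_3": [5, 4, 3, 2, 1],
}
pair, rho = best_annotator_pair(rankings)
keep = rho >= THRESHOLD  # item is kept only if the best pair agrees enough
print(pair, round(rho, 2), keep)
```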
 
+ #### Who are the source language producers?
 
  [More Information Needed]
 
+ ### Annotations
+ > The evaluation method consists of the following two steps: (1) ranking review texts by an LVLM and (2) measuring rank correlation between the LVLM and humans.
+ #### Annotation process
 
+ - #### Ranking review texts by LVLM
+ - perplexity-based ranking
+ >We employ perplexity as the evaluation metric for ranking review texts by an LVLM.
+ >We compute perplexity by inputting both the image and its corresponding review text, along with the prompt below:
+
+ >`Prompt:
+ >Please describe a review text about the good points and room for improvement of the image`
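Given per-token log-likelihoods from an LVLM (the model call itself is omitted here, and how each LVLM scores the image-plus-prompt context varies), the ranking step reduces to: lower perplexity ranks higher. A sketch with made-up scores:

```python
import math

def perplexity(token_log_likelihoods):
    # PPL = exp(-mean per-token log-likelihood) over the review's tokens.
    return math.exp(-sum(token_log_likelihoods) / len(token_log_likelihoods))

# Made-up per-token log-likelihoods for five review texts of one image.
scores = {
    "text1": [-0.9, -1.1, -1.0],
    "text2": [-2.0, -2.2, -1.8],
    "text3": [-0.4, -0.5, -0.6],
    "text4": [-3.0, -2.5, -2.9],
    "text5": [-1.5, -1.4, -1.6],
}
ppl = {name: perplexity(lls) for name, lls in scores.items()}
# Rank 1 = lowest perplexity (most plausible under the model).
ranking = sorted(ppl, key=ppl.get)
print(ranking)  # ['text3', 'text1', 'text5', 'text2', 'text4']
```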
+
+ - response-based ranking
+ >In some LVLMs, like GPT-4V, calculating perplexity is not straightforward.
+ >Therefore, we also consider a method of directly ranking with a prompt.
+
+ >Prompt:
+ >Below are the images and their review texts. Please rank the review text of each image from 1 to 5, in order of appropriateness. Please note that the numbers from 1 to 5 are not scores but rankings, and the smaller the number, the more appropriate it is. There should be no ties, and each rank from 1 to 5 should always appear once.
+ >Please judge the appropriateness by the following aspects in the following order. That is, first, rank the texts by truthfulness. If there are equally truthful texts, rank them by consistency. Similarly, if they are equal also in consistency, rank them by informativeness; if they are equal also in it, rank them by objectivity; if they are equal also in it, rank them by fluency.
+ >1. Truthfulness: Is it free of false information?
+ >2. Consistency: Does it correspond to the image?
+ >3. Informativeness: Does it describe detailed information or features of the image?
+ >4. Objectivity: Is it an objective description?
+ >5. Fluency: Is it grammatically correct?
+ >If the text contains unfamiliar information, you may use a dictionary or search engine. However, please do not use a generative AI such as ChatGPT or image search.
+ >Do not include the reason for ranking. Absolutely respond in the following format: text1:2nd place, text2:3rd place, text3:1st place, text4:5th place, text5:4th place
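The fixed response format makes the model's ranking machine-readable. A small parser for the format shown at the end of the prompt (a sketch; the card does not publish the actual evaluation code):

```python
import re

def parse_ranking(response: str) -> dict:
    # Extract "textN:Mth place" pairs into {text_id: rank}.
    pairs = re.findall(r'(text\d+):(\d+)(?:st|nd|rd|th) place', response)
    return {text_id: int(rank) for text_id, rank in pairs}

response = ("text1:2nd place, text2:3rd place, text3:1st place, "
            "text4:5th place, text5:4th place")
print(parse_ranking(response))
# {'text1': 2, 'text2': 3, 'text3': 1, 'text4': 5, 'text5': 4}
```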
+
+ - #### Measuring rank correlation between LVLM and humans
+ >The rank correlation between the top-correlated annotators and an LVLM is then measured.
 
+ #### Who are the annotators?
+ >The English data were ranked by three native and near-native English speakers, whereas the Japanese data were ranked by three native Japanese speakers.
 
+ ### Personal and Sensitive Information
 
  [More Information Needed]
 
+ ## Considerations for Using the Data
+ >While the proposed method emphasizes consistency and objectivity in assessing the image review capabilities of LVLMs, it does not evaluate from the perspective of domain knowledge, which remains a challenge for future work.
 
 
 
 
+ ### Social Impact of Dataset
 
  [More Information Needed]
 
+ ### Discussion of Biases
+ > As acknowledged on its official pages [(1,](https://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view#Bias_in_sources)[ 2)](https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources#Biased_or_opinionated_sources),
+ > the present English Wikipedia allows the inclusion of information from sources that may be biased.
+ > Consequently, the dataset we developed might also reflect the inherent biases of the English Wikipedia.
 
+ ### Other Known Limitations
 
+ >In this study, our dataset was created using images obtained from English Wikipedia. The editors of English Wikipedia remove unnecessarily aggressive content, and we also excluded images involving political issues and other sensitive topics from our dataset.
+ >However, as acknowledged on its official pages, the present English Wikipedia allows the inclusion of information from sources that may be biased. Consequently, the dataset we developed might also reflect the inherent biases of the English Wikipedia.
 
+ ## Additional Information
 
+ ### Dataset Curators
 
  [More Information Needed]
 
+ ### Licensing Information
+ For licensing information, please refer to the licenses of the specific data subsets you utilize.
+
+ [Wikipedia License](https://en.wikipedia.org/wiki/Wikipedia:Copyrights)
+ [OpenAI Terms of use](https://openai.com/policies/terms-of-use)
+
+ ### Citation Information
+ To cite this work, please use the following format:
+ ```
+ @software{Wiki-ImageReview1.0,
+   author = {naist-nlp},
+   title = {A Dataset for Evaluating the Image Review Capability of Vision Language Models},
+   year = {2024},
+   url = {https://github.com/naist-nlp/Hackathon-2023-Summer}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.