maalber committed · Commit 2d31689 · verified · 1 Parent(s): f65e504

Update README.md

Files changed (1)
1. README.md +118 -6
README.md CHANGED
@@ -75,7 +75,121 @@ If you want to replicate the annotation setup, the steps are outlined at the [bo
 
 This dataset and the annotation process are described in further detail in our blog post [Beyond Image Preferences](https://huggingface.co/blog/RapidataAI/beyond-image-preferences).
 
- # Word Scores
+ # Usage Examples
+ Accessing this data is easy with the Hugging Face `datasets` library. For quick demos or previews, we recommend setting `streaming=True`, as downloading the whole dataset can take a while.
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train", streaming=True)
+ ```
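+
+ With `streaming=True`, `load_dataset` returns an iterable dataset, so you can peek at a single row without downloading anything else. A minimal sketch:
+
+ ```python
+ # Fetch the first example from the stream and inspect its fields.
+ example = next(iter(ds))
+ print(example.keys())
+ print(example["prompt"])
+ ```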
+
+ As an example, we show below how to replicate the figures from the Data Summary section further down.
+
+ <details>
+ <summary>Click to expand Select Words example</summary>
+ The methods below can be used to produce figures similar to the ones shown below.
+ Note, however, that those figures were created using `matplotlib`; we opt for `opencv` here, as it makes calculating the text spacing much easier.
+
+ **Methods**
+ ```python
+ from PIL import Image
+ import cv2
+ import numpy as np
+
+ def get_colors(scores):
+     # Map each word score to a color on cv2.COLORMAP_AUTUMN: the worst-depicted
+     # word (highest score) gets the most intense color.
+     colors = []
+     for item in scores:
+         intensity = item / max(scores)
+         value = np.uint8((1 - intensity) * 255)
+         color = tuple(map(int, cv2.applyColorMap(np.array([[value]]), cv2.COLORMAP_AUTUMN)[0][0]))
+         colors.append(color)
+     return colors
+
+ def get_wrapped_text(text_color_pairs, font, font_scale, thickness, word_spacing, max_width):
+     # Greedily wrap the (word, color) pairs into lines that fit inside max_width.
+     wrapped_text_color_pairs, current_line, line_width = [], [], 0
+     for text, color in text_color_pairs:
+         text_size = cv2.getTextSize(text, font, font_scale, thickness)[0]
+         if line_width + text_size[0] > max_width:
+             wrapped_text_color_pairs.append(current_line)
+             current_line, line_width = [], 0
+         current_line.append((text, color, text_size))
+         line_width += text_size[0] + word_spacing
+     wrapped_text_color_pairs.append(current_line)
+     return wrapped_text_color_pairs
+
+ def add_multicolor_text(input_image, text_color_pairs, font_scale=1, thickness=2, word_spacing=20):
+     # Draw the color-coded words on a semi-transparent gray banner at the top of the image.
+     image = cv2.cvtColor(np.array(input_image), cv2.COLOR_RGB2BGR)
+     image_height, image_width, _ = image.shape
+
+     font = cv2.FONT_HERSHEY_SIMPLEX
+     wrapped_text = get_wrapped_text(text_color_pairs, font, font_scale, thickness, word_spacing, int(image_width*0.95))
+
+     position = (int(0.025*image_width), int(word_spacing*2))
+
+     overlay = image.copy()
+     cv2.rectangle(overlay, (0, 0), (image_width, int((len(wrapped_text)+1)*word_spacing*2)), (100,100,100), -1)
+     out_img = cv2.addWeighted(overlay, 0.75, image, 0.25, 0)
+
+     for idx, text_line in enumerate(wrapped_text):
+         current_x, current_y = position[0], position[1] + int(idx*word_spacing*2)
+         for text, color, text_size in text_line:
+             cv2.putText(out_img, text, (current_x, current_y), font, font_scale, color, thickness)
+             current_x += text_size[0] + word_spacing
+
+     return Image.fromarray(cv2.cvtColor(out_img, cv2.COLOR_BGR2RGB))
+ ```
+ **Create figures**
+ ```python
+ import ast
+
+ ds_words = ds.select_columns(["image", "prompt", "word_scores"])
+
+ for example in ds_words.take(5):
+     image = example["image"]
+     prompt = example["prompt"]
+     # word_scores is stored as a string of (word, score) pairs; parse it
+     # with ast.literal_eval rather than eval for safety.
+     pairs = ast.literal_eval(example["word_scores"])
+     words = [s[0] for s in pairs]
+     word_scores = [s[1] for s in pairs]
+     colors = get_colors(word_scores)
+     # display() assumes a notebook environment; use .show() in a plain script.
+     display(add_multicolor_text(image, list(zip(words, colors)), font_scale=1, thickness=2, word_spacing=20))
+ ```
+ </details>
+
+ <details>
+ <summary>Click to expand Heatmap example</summary>
+
+ **Methods**
+ ```python
+ import cv2
+ import numpy as np
+ from PIL import Image
+
+ def overlay_heatmap(image, heatmap, alpha=0.3):
+     # Convert the PIL image to BGR for OpenCV.
+     cv2_image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)
+     # Normalize the heatmap to [0, 255], color it, and blend it over the image.
+     heatmap_normalized = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min())
+     heatmap_normalized = np.uint8(255 * heatmap_normalized)
+     heatmap_colored = cv2.applyColorMap(heatmap_normalized, cv2.COLORMAP_HOT)
+     overlaid_image = cv2.addWeighted(cv2_image, 1 - alpha, heatmap_colored, alpha, 0)
+
+     return Image.fromarray(cv2.cvtColor(overlaid_image, cv2.COLOR_BGR2RGB))
+ ```
+ **Create figures**
+ ```python
+ ds_heatmap = ds.select_columns(["image", "prompt", "alignment_heatmap"])
+
+ for example in ds_heatmap.take(5):
+     image = example["image"]
+     heatmap = example["alignment_heatmap"]
+     # Only images with low alignment scores carry a heatmap.
+     if heatmap:
+         display(overlay_heatmap(image, np.asarray(heatmap)))
+ ```
+
+ </details>
+
+ <br/>
+
+ # Data Summary
+
+ ## Word Scores
 Users identified words from the prompts that were NOT accurately depicted in the generated images. Higher word scores indicate poorer representation in the image. Participants also had the option to select "[No_mistakes]" for prompts where all elements were accurately depicted.
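+
+ As the usage example above shows, each `word_scores` entry is stored as a string of (word, score) pairs. A minimal sketch for pulling out the worst-depicted word per prompt:
+
+ ```python
+ import ast
+
+ for example in ds.take(5):
+     pairs = ast.literal_eval(example["word_scores"])
+     # The highest-scoring word is the one users flagged most often.
+     worst_word, worst_score = max(pairs, key=lambda p: p[1])
+     print(f"{example['prompt']!r}: worst word = {worst_word} ({worst_score})")
+ ```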
 
 ### Example Results:
 
@@ -84,7 +198,7 @@ Users identified words from the prompts that were NOT accurately depicted in the
 | <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/4uWKVjZBA5aX2YDUYNpdV.png" width="500"> | <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/f9JIuwDoNohy7EkDYILFm.png" width="500"> |
 
 
- # Coherence
+ ## Coherence
 The coherence score measures whether the generated image is logically consistent and free from artifacts or visual glitches. Without seeing the original prompt, users were asked: "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?" Each image received at least 21 responses indicating the level of coherence on a scale of 1-5, which were then averaged to produce the final scores, where 5 indicates the highest coherence.
 
 Images scoring below 3.8 in coherence were further evaluated, with participants marking specific errors in the image.
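+
+ As a quick sketch, you can stream the dataset and keep only images under the 3.8 cutoff. The column name `coherence_score` is an assumption for illustration; check `ds.features` for the actual field name:
+
+ ```python
+ # "coherence_score" is a hypothetical column name; verify it against ds.features.
+ flagged = [ex for ex in ds.take(100) if ex["coherence_score"] < 3.8]
+ print(f"{len(flagged)} of the first 100 images fall below the 3.8 coherence cutoff")
+ ```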
 
@@ -96,7 +210,7 @@ Images scoring below 3.8 in coherence were further evaluated, with participants
 | <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/mRDdoQdc4_iy2JcLhdI7J.png" width="500"> | <img src="https://cdn-uploads.huggingface.co/production/uploads/672b7d79fd1e92e3c3567435/2N2KJyz4YOGT6N6tuUX8M.png" width="500"> |
 
 
- # Alignment
+ ## Alignment
 The alignment score quantifies how well an image matches its prompt. Users were asked: "How well does the image match the description?" Again, each image received at least 21 responses indicating the level of alignment on a scale of 1-5 (5 being the highest), which were then averaged.
 
 For images with an alignment score below 3.2, additional users were asked to highlight areas where the image did not align with the prompt. These responses were then compiled into a heatmap.
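+
+ Besides overlaying the heatmap as in the usage example above, the raw array can be inspected directly, e.g. to locate the region users flagged most often. A minimal sketch, assuming the heatmap is a 2D array of annotation counts:
+
+ ```python
+ import numpy as np
+
+ for example in ds.take(20):
+     if example["alignment_heatmap"]:  # only low-alignment images carry one
+         heatmap = np.asarray(example["alignment_heatmap"])
+         y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
+         print(f"Most-flagged pixel for {example['prompt']!r}: ({x}, {y})")
+ ```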
 
@@ -105,7 +219,6 @@ As mentioned in the Google paper, alignment is harder to annotate consistently, i
 
 ### Example Results:
 
-
 
 <style>
 .example-results-grid {
 display: grid;
 
@@ -181,8 +294,7 @@ As mentioned in the Google paper, alignment is harder to annotate consistently, i
 </div>
 </div>
 
-
- # Style
+ ## Style
 
 The style score reflects how visually appealing participants found each image, independent of the prompt. Users were asked: "How much do you like the way this image looks?" Each image received 21 responses grading on a scale of 1-5, which were then averaged.
 In contrast to other preference collection methods, such as the Hugging Face image arena, these preferences were collected from people around the world (156 different countries) and from all walks of life, creating a more representative score.
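+
+ A sketch for comparing models by average style score. The column names `model` and `style_score` are assumptions for illustration; check `ds.features` for the actual fields:
+
+ ```python
+ from collections import defaultdict
+
+ sums = defaultdict(float)
+ counts = defaultdict(int)
+ for example in ds.take(500):
+     # "model" and "style_score" are hypothetical column names.
+     sums[example["model"]] += example["style_score"]
+     counts[example["model"]] += 1
+
+ for model in sums:
+     print(f"{model}: {sums[model] / counts[model]:.2f}")
+ ```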