maalber committed on
Commit d3628c9 · verified · 1 Parent(s): a030986

Update README.md

Files changed (1)
  1. README.md +129 -4
README.md CHANGED
@@ -81,7 +81,7 @@ Users identified words from the prompts that were NOT accurately depicted in the
81
 
82
 
83
  # Coherence
84
- The coherence score measures whether the generated image is logically consistent and free from artifacts or visual glitches. Without seeing the original prompt, users were asked: "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?" Each image received 21 responses, which were aggregated on a scale of 1-5.
85
 
86
  Images scoring below 3.8 in coherence were further evaluated, with participants marking specific errors in the image.
87
 
@@ -93,7 +93,7 @@ Images scoring below 3.8 in coherence were further evaluated, with participants
93
 
94
 
95
  # Alignment
96
- The alignment score quantifies how well an image matches its prompt. Users were asked: "How well does the image match the description?". The final score is calculated on a scale of 1-5 by aggregating 21 responses per prompt-image pair.
97
 
98
  For images with an alignment score below 3.2, additional users were asked to highlight areas where the image did not align with the prompt. These responses were then compiled into a heatmap.
99
 
@@ -179,7 +179,8 @@ As mentioned in the google paper, aligment is harder to annotate consistently, i
179
 
180
 
181
  # Style
182
- The style score reflects how visually appealing participants found each image, independent of the prompt. Users were asked: "How much do you like the way this image looks?" Each image received 21 responses, which were aggregated on a scale of 1-5. In contrast to other prefrence collection methods, such as the huggingface image arena, the preferences were collected from humans from around the world (156 different countries) from all walks of life, creating a more representative score.
 
183
 
184
  # About Rapidata
185
  Rapidata's technology makes collecting human feedback at scale faster and more accessible than ever before. Visit [rapidata.ai](https://www.rapidata.ai/) to learn more about how we're revolutionizing human feedback collection for AI development.
@@ -190,4 +191,128 @@ We run a benchmark of the major image generation models, the results can be foun
190
  - Link to the [Text-2-Image Alignment dataset](https://huggingface.co/datasets/Rapidata/Flux_SD3_MJ_Dalle_Human_Alignment_Dataset)
191
  - Link to the [Preference dataset](https://huggingface.co/datasets/Rapidata/700k_Human_Preference_Dataset_FLUX_SD3_MJ_DALLE3)
192
 
193
- We have also started to run a [video generation benchmark](https://www.rapidata.ai/leaderboard/video-models), it is still a work in progress and currently only covers 2 models. They are also analysed in coherence/plausiblity, alignment and style preference.
 
81
 
82
 
83
  # Coherence
84
+ The coherence score measures whether the generated image is logically consistent and free from artifacts or visual glitches. Without seeing the original prompt, users were asked: "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?" Each image received at least 21 responses rating its coherence on a scale of 1-5, which were then averaged to produce the final score, with 5 indicating the highest coherence.
85
 
86
  Images scoring below 3.8 in coherence were further evaluated, with participants marking specific errors in the image.
87
 
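+
+ As a rough sketch (an illustration only, not the exact aggregation pipeline), the per-image coherence score can be thought of as the average of the collected 1-5 ratings, with low-scoring images routed to the follow-up error-marking step:
+
+ ```python
+ # Hypothetical ratings for one image; the real dataset uses at least 21 responses per image.
+ responses = [4, 5, 3, 4, 5, 2, 4]
+
+ coherence_score = sum(responses) / len(responses)
+ needs_error_marking = coherence_score < 3.8  # threshold used for the follow-up error marking
+
+ print(f"coherence score: {coherence_score:.2f}, flagged for error marking: {needs_error_marking}")
+ ```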
 
93
 
94
 
95
  # Alignment
96
+ The alignment score quantifies how well an image matches its prompt. Users were asked: "How well does the image match the description?" Again, each image received at least 21 responses rating the alignment on a scale of 1-5 (5 being the highest), which were then averaged.
97
 
98
  For images with an alignment score below 3.2, additional users were asked to highlight areas where the image did not align with the prompt. These responses were then compiled into a heatmap.
99
 
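+ As a minimal sketch of how such point annotations could be turned into a heatmap (an illustration only, not the exact pipeline used for this dataset; the click coordinates and image size below are made up):
+
+ ```python
+ import numpy as np
+ from scipy.ndimage import gaussian_filter
+
+ # hypothetical tap locations (x, y) from several annotators for one 512x512 image
+ clicks = [(130, 210), (140, 200), (300, 420), (135, 215)]
+ height, width = 512, 512
+
+ # accumulate the clicks into a density grid, then smooth it into a heatmap
+ grid = np.zeros((height, width), dtype=float)
+ for x, y in clicks:
+     grid[y, x] += 1.0
+
+ heatmap = gaussian_filter(grid, sigma=25)
+ heatmap /= heatmap.max()  # normalize to [0, 1] for overlaying on the image
+ ```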
 
179
 
180
 
181
  # Style
182
+ The style score reflects how visually appealing participants found each image, independent of the prompt. Users were asked: "How much do you like the way this image looks?" Each image received 21 responses grading it on a scale of 1-5, which were then averaged.
183
+ In contrast to other preference collection methods, such as the Hugging Face image arena, the preferences were collected from people around the world (156 different countries) from all walks of life, creating a more representative score.
184
 
185
  # About Rapidata
186
  Rapidata's technology makes collecting human feedback at scale faster and more accessible than ever before. Visit [rapidata.ai](https://www.rapidata.ai/) to learn more about how we're revolutionizing human feedback collection for AI development.
 
191
  - Link to the [Text-2-Image Alignment dataset](https://huggingface.co/datasets/Rapidata/Flux_SD3_MJ_Dalle_Human_Alignment_Dataset)
192
  - Link to the [Preference dataset](https://huggingface.co/datasets/Rapidata/700k_Human_Preference_Dataset_FLUX_SD3_MJ_DALLE3)
193
 
194
+ We have also started running a [video generation benchmark](https://www.rapidata.ai/leaderboard/video-models). It is still a work in progress and currently covers only 2 models, which are also analysed for coherence/plausibility, alignment, and style preference.
195
+
196
+ # Replicating the Annotation Setup
197
+ For researchers interested in producing their own rich preference dataset, you can use the Rapidata API directly through Python. The code snippets below show how to replicate the modalities used in this dataset. Additional information is available in the [documentation](https://docs.rapidata.ai/).
198
+
199
+ <details>
200
+ <summary>Creating the Rapidata Client and Downloading the Dataset</summary>
201
+ First install the rapidata package, then create the `RapidataClient()`. This client will be used to create and launch the annotation setup.
202
+
203
+ ```bash
204
+ pip install rapidata
205
+ ```
206
+
207
+ ```python
208
+ from rapidata import RapidataClient, LabelingSelection, ValidationSelection
209
+
210
+ client = RapidataClient()
211
+ ```
212
+
213
+ As example data, we will just use images from the dataset. Make sure to set `streaming=True`, as downloading the whole dataset would otherwise take a significant amount of time.
214
+
215
+ ```python
216
+ from datasets import load_dataset
217
+
218
+ ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train", streaming=True)
219
+ ds = ds.select_columns(["image","prompt"])
220
+ ```
221
+
222
+ Since we use streaming, we can extract the prompts and download the images we need like this:
223
+
224
+ ```python
225
+ import os
226
+ tmp_folder = "demo_images"
227
+
228
+
229
+ # make folder if it doesn't exist
230
+ if not os.path.exists(tmp_folder):
231
+     os.makedirs(tmp_folder)
232
+
233
+
234
+ prompts = []
235
+ image_paths = []
236
+ for i, row in enumerate(ds.take(10)):
237
+     prompts.append(row["prompt"])
238
+     # save image to disk
239
+     save_path = os.path.join(tmp_folder, f"{i}.jpg")
240
+     row["image"].save(save_path)
241
+     image_paths.append(save_path)
242
+ ```
243
+ </details>
244
+
245
+ <details>
246
+ <summary>Likert Scale Alignment Score</summary>
247
+ To launch a Likert scale annotation order, we make use of the classification annotation modality. Below we show the setup for the alignment criterion.
248
+ The structure is the same for style and coherence; however, the arguments have to be adjusted accordingly, i.e. different instructions, answer options, and validation sets. A sketch of the coherence variant is shown after the alignment example below.
249
+
250
+ ```python
251
+ # Alignment Example
252
+ instruction = "How well does the image match the description?"
253
+ answer_options = [
254
+ "1: Not at all",
255
+ "2: A little",
256
+ "3: Moderately",
257
+ "4: Very well",
258
+ "5: Perfectly"
259
+ ]
260
+
261
+ order = client.order.create_classification_order(
262
+ name="Alignment Example",
263
+ instruction=instruction,
264
+ answer_options=answer_options,
265
+ datapoints=image_paths,
266
+ contexts=prompts, # for alignment, prompts are required as context for the annotators.
267
+ responses_per_datapoint=10,
268
+ selections=[ValidationSelection("676199a5ef7af86285630ea6"), LabelingSelection(1)] # here we use a pre-defined validation set. See https://docs.rapidata.ai/improve_order_quality/ for details
269
+ )
270
+
271
+ order.run() # This starts the order. Follow the printed link to see progress.
272
+ ```
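+
+ For coherence, a sketch of the adjusted setup is shown below. Note that no prompts are passed as context, since annotators judge coherence without seeing the prompt. The answer options and the validation set ID are illustrative placeholders, not the exact ones used for this dataset:
+
+ ```python
+ # Coherence variant (sketch)
+ coherence_instruction = "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?"
+ coherence_options = [  # illustrative wording
+     "1: Many severe errors",
+     "2: Several errors",
+     "3: Some errors",
+     "4: Minor errors",
+     "5: No errors"
+ ]
+
+ coherence_order = client.order.create_classification_order(
+     name="Coherence Example",
+     instruction=coherence_instruction,
+     answer_options=coherence_options,
+     datapoints=image_paths,
+     # no contexts here: coherence is judged without showing the prompt
+     responses_per_datapoint=10,
+     selections=[ValidationSelection("<your-coherence-validation-set-id>"), LabelingSelection(1)]  # placeholder validation set ID
+ )
+
+ coherence_order.run()
+ ```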
273
+ </details>
274
+
275
+ <details>
276
+ <summary>Alignment Heatmap</summary>
277
+ To produce heatmaps, we use the locate annotation modality. Below is the setup used for creating the alignment heatmaps.
278
+
279
+ ```python
280
+ # alignment heatmap
281
+ # Note that the selected images may not actually have severely misaligned elements, but this is just for demonstration purposes.
282
+
283
+ order = client.order.create_locate_order(
284
+ name="Alignment Heatmap Example",
285
+ instruction="What part of the image does not match with the description? Tap to select.",
286
+ datapoints=image_paths,
287
+ contexts=prompts, # for alignment, prompts are required as context for the annotators.
288
+ responses_per_datapoint=10,
289
+ selections=[ValidationSelection("67689e58026456ec851f51f8"), LabelingSelection(1)] # here we use a pre-defined validation set for alignment. See https://docs.rapidata.ai/improve_order_quality/ for details
290
+ )
291
+
292
+ order.run() # This starts the order. Follow the printed link to see progress.
293
+ ```
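+
+ The coherence error markings mentioned earlier (participants marking specific errors in low-scoring images) can be collected with the same locate modality. A sketch, with an illustrative instruction and a placeholder validation set ID, and without passing prompts as context:
+
+ ```python
+ # Coherence error-marking variant (sketch)
+ coherence_locate_order = client.order.create_locate_order(
+     name="Coherence Error Marking Example",
+     instruction="Look closely, tap on areas with weird errors, like malformed objects or visual glitches.",  # illustrative wording
+     datapoints=image_paths,
+     responses_per_datapoint=10,
+     selections=[ValidationSelection("<your-validation-set-id>"), LabelingSelection(1)]  # placeholder validation set ID
+ )
+
+ coherence_locate_order.run()
+ ```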
294
+ </details>
295
+
296
+ <details>
297
+ <summary>Select Misaligned Words</summary>
298
+ To launch the annotation setup for selecting misaligned words, we used the following configuration:
299
+
300
+ ```python
301
+ # Select words example
302
+
303
+ from rapidata import LanguageFilter
304
+
305
+ select_words_prompts = [p + " [No_Mistake]" for p in prompts]
306
+ order = client.order.create_select_words_order(
307
+ name="Select Words Example",
308
+ instruction = "The image is based on the text below. Select mistakes, i.e., words that are not aligned with the image.",
309
+ datapoints=image_paths,
310
+ sentences=select_words_prompts,
311
+ responses_per_datapoint=10,
312
+ filters=[LanguageFilter(["en"])], # here we add a filter to ensure only English-speaking annotators are selected
313
+ selections=[ValidationSelection("6761a86eef7af86285630ea8"), LabelingSelection(1)] # here we use a pre-defined validation set. See https://docs.rapidata.ai/improve_order_quality/ for details
314
+ )
315
+
316
+ order.run()
317
+ ```
318
+ </details>