---
license: apache-2.0
dataset_info:
  features:
    - name: image
      dtype: image
    - name: prompt
      dtype: string
    - name: word_scores
      dtype: string
    - name: alignment_score_norm
      dtype: float32
    - name: coherence_score_norm
      dtype: float32
    - name: style_score_norm
      dtype: float32
    - name: alignment_heatmap
      sequence:
        sequence: float16
    - name: coherence_heatmap
      sequence:
        sequence: float16
    - name: alignment_score
      dtype: float32
    - name: coherence_score
      dtype: float32
    - name: style_score
      dtype: float32
  splits:
    - name: train
      num_bytes: 25257389633.104
      num_examples: 13024
  download_size: 17856619960
  dataset_size: 25257389633.104
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-to-image
  - text-classification
  - image-classification
  - image-to-text
  - image-segmentation
language:
  - en
tags:
  - t2i
  - preferences
  - human
  - flux
  - midjourney
  - imagen
  - dalle
  - heatmap
  - coherence
  - alignment
  - style
  - plausibility
pretty_name: Rich Human Feedback for Text to Image Models
size_categories:
  - 10K<n<100K
---

Building upon Google's research *Rich Human Feedback for Text-to-Image Generation*, we collected over 1.5 million responses from 152,684 individual humans using Rapidata's Python API. Collection took roughly five days.

# Overview

We asked humans to evaluate AI-generated images on style, coherence, and prompt alignment. For images that contained flaws, participants were asked to mark the specific problematic areas. Additionally, for all images, participants identified words from the prompt that were not accurately represented in the generated image.
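The dataset can be explored with the `datasets` library. A minimal sketch (the repository id is assumed from this card's location on the Hub):

```python
from datasets import load_dataset

# Stream so that inspecting a few examples does not require the full ~18 GB download.
# The repository id below is assumed from this card's location on the Hub.
ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train", streaming=True)

example = next(iter(ds))
print(example["prompt"])
print(example["alignment_score"], example["coherence_score"], example["style_score"])
```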

# Word Scores

Users identified words from the prompts that were NOT accurately depicted in the generated images. Higher word scores indicate poorer representation in the image. Participants also had the option to select "[No_mistakes]" for prompts where all elements were accurately depicted.
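Per the schema above, `word_scores` is stored as a string. Assuming it is a JSON-encoded list of `[word, score]` pairs (inspect a raw value from your copy to confirm the exact layout), the flagged words can be ranked like this, continuing from the snippet above:

```python
import json

# Assumption: `word_scores` is a JSON-encoded list of [word, score] pairs.
word_scores = json.loads(example["word_scores"])

# Words most often flagged as badly depicted come out on top.
for word, score in sorted(word_scores, key=lambda ws: ws[1], reverse=True):
    print(f"{word}: {score}")
```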

Example Results:

# Coherence

The coherence score measures whether the generated image is logically consistent and free from artifacts or visual glitches. Without seeing the original prompt, users were asked: "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?" Each image received 21 responses, which were aggregated on a scale of 1-5.

Images scoring below 3.8 in coherence were further evaluated, with participants marking specific errors in the image.
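A sketch of selecting those re-annotated images with a standard, non-streaming load (same assumed repository id as above):

```python
from datasets import load_dataset

# Non-streaming load (downloads the full dataset) so that filter/sort are available.
ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train")

# Per the card, images below the 3.8 coherence threshold were re-annotated with
# error markings, so these rows should carry populated coherence heatmaps.
flawed = ds.filter(lambda score: score < 3.8, input_columns=["coherence_score"])
print(f"{len(flawed)} of {len(ds)} images fell below the coherence threshold")
```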

Example Results:

# Alignment

The alignment score quantifies how well an image matches its prompt. Users were asked: "How well does the image match the description?" The final score is calculated on a scale of 1-5 by aggregating 21 responses per prompt-image pair.

For images with an alignment score below 3.2, additional users were asked to highlight areas where the image did not align with the prompt. These responses were then compiled into a heatmap.

As mentioned in the Google paper, alignment is harder to annotate consistently: if, for example, an object is missing, it is unclear to annotators what they should highlight.
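A sketch of overlaying one of these heatmaps on its image, assuming the nested `float16` lists from the schema convert to a 2D array (continuing from the non-streaming load above):

```python
import numpy as np
import matplotlib.pyplot as plt

# Pick a low-alignment example; per the card, only these carry alignment heatmaps.
low = ds.filter(lambda score: score < 3.2, input_columns=["alignment_score"])
ex = low[0]

img = ex["image"]                                           # PIL image
heat = np.array(ex["alignment_heatmap"], dtype=np.float32)  # 2D array per the schema

plt.imshow(img)
# Stretch the heatmap over the image regardless of its native resolution.
plt.imshow(heat, cmap="jet", alpha=0.4, extent=(0, img.width, img.height, 0))
plt.axis("off")
plt.show()
```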

Example Results:

- Prompt: "Three cats and one dog sitting on the grass."
- Prompt: "A brown toilet with a white wooden seat."
- Prompt: "Photograph of a pale Asian woman, wearing an oriental costume, sitting in a luxurious white chair. Her head is floating off the chair, with the chin on the table and chin on her knees, her chin on her knees. Closeup"
- Prompt: "A tennis racket underneath a traffic light."

# Style

The style score reflects how visually appealing participants found each image, independent of the prompt. Users were asked: "How much do you like the way this image looks?" Each image received 21 responses, which were aggregated on a scale of 1-5. In contrast to other preference collection methods, such as the Hugging Face image arena, these preferences were collected from people in 156 different countries and from all walks of life, yielding a more representative score.
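For example, images can be ranked by this score to browse the most and least liked generations (continuing from the non-streaming load above):

```python
# Rank images by style score; the top rows are the most liked generations.
ranked = ds.sort("style_score", reverse=True)
for ex in ranked.select(range(3)):
    print(f'{ex["style_score"]:.2f}  {ex["prompt"]}')
```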

# About Rapidata

Rapidata's technology makes collecting human feedback at scale faster and more accessible than ever before. Visit rapidata.ai to learn more about how we're revolutionizing human feedback collection for AI development.