---
license: apache-2.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: word_scores
    dtype: string
  - name: alignment_score
    dtype: float32
  - name: coherence_score
    dtype: float32
  - name: style_score
    dtype: float32
  - name: alignment_heatmap
    sequence:
      sequence: float16
  - name: coherence_heatmap
    sequence:
      sequence: float16
  splits:
  - name: train
    num_bytes: 13690247160.8
    num_examples: 6550
  download_size: 9033856469
  dataset_size: 13690247160.8
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-to-image
- text-classification
- image-classification
- image-to-text
- image-segmentation
language:
- en
tags:
- t2i
- preferences
- human
- flux
- midjourney
- imagen
- dalle
- heatmap
- coherence
- alignment
- style
- plausibility
pretty_name: Rich Human Feedback for Text to Image Models
size_categories:
- 1M<n<10M
---
Building upon Google's research *Rich Human Feedback for Text-to-Image Generation*, we have collected over 1.5 million responses from 152,684 individual humans using the Rapidata Python API.
## Overview
We asked humans to evaluate AI-generated images in style, coherence and prompt alignment. For images that contained flaws, participants were asked to identify specific problematic areas. Additionally, for all images, participants identified words from the prompts that were not accurately represented in the generated images.
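As a starting point, here is a minimal sketch of loading the data with the 🤗 `datasets` library. The repository id below is an assumption; substitute this dataset's actual id.

```python
from datasets import load_dataset

# Stream to avoid downloading the full ~9 GB of shards up front.
# NOTE: repository id is assumed; replace with this dataset's actual id.
ds = load_dataset(
    "Rapidata/text-2-image-Rich-Human-Feedback",
    split="train",
    streaming=True,
)

example = next(iter(ds))
print(example["prompt"])
print(example["alignment_score"], example["coherence_score"], example["style_score"])
```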
## Word Scores
Users identified words from the prompts that were NOT accurately depicted in the generated images. Higher word scores indicate poorer representation in the image. Participants also had the option to select "[No_mistakes]" for prompts where all elements were accurately depicted.
Example Results:
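The `word_scores` field is stored as a string. Assuming it holds a JSON-encoded list of `[word, score]` pairs (check a sample of the data to confirm the encoding), it can be unpacked as in this sketch, continuing from the loading example above:

```python
import json

def parse_word_scores(raw: str) -> list[tuple[str, float]]:
    """Decode the word_scores string into (word, score) pairs,
    worst-represented words first. "[No_mistakes]" appears as a
    regular entry when annotators found nothing wrong.
    Assumes a JSON-encoded list of [word, score] pairs."""
    pairs = [(word, float(score)) for word, score in json.loads(raw)]
    return sorted(pairs, key=lambda p: p[1], reverse=True)

for word, score in parse_word_scores(example["word_scores"]):
    print(f"{score:.2f}  {word}")
```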
## Coherence
The coherence score measures whether the generated image is logically consistent and free from artifacts or visual glitches. Without seeing the original prompt, users were asked: "Look closely, does this image have weird errors, like senseless or malformed objects, incomprehensible details, or visual glitches?" Each image received 21 responses, which were aggregated on a scale of 1-5.
Images scoring below 3.8 in coherence were further evaluated, with participants marking specific errors in the image.
Example Results:
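To isolate the images that entered this second error-marking stage, one can filter on the 3.8 threshold. A sketch assuming a full (non-streaming) load of the split, with the repository id assumed as above:

```python
from datasets import load_dataset

ds = load_dataset("Rapidata/text-2-image-Rich-Human-Feedback", split="train")

# input_columns avoids decoding the image column just to read a float.
flagged = ds.filter(lambda score: score < 3.8,
                    input_columns=["coherence_score"])
print(f"{len(flagged)} of {len(ds)} images were flagged for error marking")
```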
## Alignment
The alignment score quantifies how well an image matches its prompt. Users were asked: "How well does the image match the description?" The final score is calculated on a scale of 1-5 by aggregating 21 responses.
For images with an alignment score below 3.2, additional users were asked to highlight areas where the image did not align with the prompt. These responses were then compiled into a heatmap.
Example Results:
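As a sketch of how the nested-list heatmaps can be inspected, the snippet below overlays an `alignment_heatmap` on its image with matplotlib. It assumes the heatmap rows form a rectangular 2-D grid, and that images above the 3.2 threshold carry an empty heatmap; it simply stretches the grid over the image.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_alignment_heatmap(example: dict) -> None:
    """Overlay the alignment heatmap (nested float16 lists) on the image."""
    heatmap = example["alignment_heatmap"]
    if not heatmap:  # assumed empty for images above the 3.2 threshold
        print("no alignment heatmap for this image")
        return
    image = example["image"]  # decoded as a PIL.Image by `datasets`
    grid = np.asarray(heatmap, dtype=np.float32)

    plt.imshow(image)
    # extent stretches the grid over the full image, whatever its resolution
    plt.imshow(grid, cmap="jet", alpha=0.5,
               extent=(0, image.width, image.height, 0))
    plt.axis("off")
    plt.show()

show_alignment_heatmap(example)
```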
## Style
The style score reflects how visually appealing participants found each image, independent of the prompt. Users were asked: "How much do you like the way this image looks?" Each image received 21 responses, which were aggregated on a scale of 1-5.
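For a quick sense of the three rating dimensions side by side, simple summary statistics over the split can be computed as follows (a sketch continuing from the non-streaming load above):

```python
import numpy as np

for col in ("alignment_score", "coherence_score", "style_score"):
    values = np.asarray(ds[col], dtype=np.float32)
    print(f"{col}: mean={values.mean():.2f}  std={values.std():.2f}")
```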