Update README.md
README.md CHANGED
@@ -71,6 +71,8 @@ Building upon Google's research [Rich Human Feedback for Text-to-Image Generation
 # Overview
 We asked humans to evaluate AI-generated images in style, coherence and prompt alignment. For images that contained flaws, participants were asked to identify specific problematic areas. Additionally, for all images, participants identified words from the prompts that were not accurately represented in the generated images.
 
+If you want to replicate the annotation setup, the steps are outlined at the bottom.
+
 # Word Scores
 Users identified words from the prompts that were NOT accurately depicted in the generated images. Higher word scores indicate poorer representation in the image. Participants also had the option to select "[No_mistakes]" for prompts where all elements were accurately depicted.
 
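The README text above says that higher word scores mean a word was represented more poorly, but it does not spell out the aggregation formula. The following is a minimal, hypothetical sketch of one way such scores could be derived, assuming each annotation is simply the list of prompt words an annotator flagged (or the literal "[No_mistakes]" token) and that a word's score is the fraction of annotators who flagged it; the function name, input format, and formula are illustrative assumptions, not the dataset's actual schema or method.

```python
from collections import Counter

def word_scores(prompt: str, annotations: list[list[str]]) -> dict[str, float]:
    """Toy aggregation: score each prompt word by the fraction of annotators
    who flagged it as not depicted. "[No_mistakes]" selections flag nothing.
    Assumed formula for illustration only, not the dataset's actual method."""
    words = prompt.split()
    flags = Counter()
    for selected in annotations:
        if selected == ["[No_mistakes]"]:
            continue  # this annotator found all prompt elements depicted
        flags.update(w for w in set(selected) if w in words)
    n = len(annotations)
    # Higher score = more annotators said the word was poorly represented.
    return {w: flags[w] / n for w in words}

# Example: three annotators, one of whom selected "[No_mistakes]".
print(word_scores(
    "a red cat on a skateboard",
    [["red", "skateboard"], ["skateboard"], ["[No_mistakes]"]],
))
```

In this sketch "skateboard" would score 2/3 and "red" 1/3, matching the stated reading that larger values indicate poorer representation in the image.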