Understanding Generative AI Capabilities in Everyday Image Editing Tasks
Abstract
Analysis of 83k image editing requests reveals that AI editors, including GPT-4o, struggle with low-creativity tasks that require precise editing while performing better on open-ended tasks, and that human and VLM judges differ in their preferences for AI versus human edits.
Generative AI (GenAI) holds significant promise for automating everyday image editing tasks, especially following the recent release of GPT-4o image generation on March 25, 2025. However, what subjects do people most often want edited? What kinds of editing actions do they want to perform (e.g., removing or stylizing the subject)? Do people prefer precise edits with predictable outcomes or highly creative ones? By understanding the characteristics of real-world requests and the corresponding edits made by freelance photo-editing wizards, can we draw lessons for improving AI-based editors and determine which types of requests AI editors can currently handle successfully? In this paper, we present a unique study addressing these questions by analyzing 83k requests posted over the past 12 years (2013–2025) to the Reddit community, along with the 305k PSR-wizard edits they received. According to human ratings, only approximately 33% of requests can be fulfilled by the best AI editors (including GPT-4o, Gemini-2.0-Flash, and SeedEdit). Interestingly, AI editors perform worse on low-creativity requests that require precise editing than on more open-ended tasks. They often fail to preserve the identity of people and animals and frequently make non-requested touch-ups. On the other side of the table, VLM judges (e.g., o1) behave differently from human judges and may prefer AI edits over human edits. Code and qualitative examples are available at: https://psrdataset.github.io
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MonetGPT: Solving Puzzles Enhances MLLMs' Image Retouching Skills (2025)
- Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing (2025)
- GIE-Bench: Towards Grounded Evaluation for Text-Guided Image Editing (2025)
- SuperEdit: Rectifying and Facilitating Supervision for Instruction-Based Image Editing (2025)
- Preliminary Explorations with GPT-4o(mni) Native Image Generation (2025)
- Improving Editability in Image Generation with Layer-wise Memory (2025)
- Synthesizing Optimal Object Selection Predicates for Image Editing using Lattices (2025)