OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis
Abstract
Graphical User Interface (GUI) agents powered by Vision-Language Models (VLMs) have demonstrated human-like computer control capability. Despite their utility in advancing digital automation, a critical bottleneck persists: collecting high-quality trajectory data for training. Common practices for collecting such data rely on human supervision or synthetic data generation through executing pre-defined tasks, which are either resource-intensive or unable to guarantee data quality. Moreover, these methods suffer from limited data diversity and significant gaps between synthetic data and real-world environments. To address these challenges, we propose OS-Genesis, a novel GUI data synthesis pipeline that reverses the conventional trajectory collection process. Instead of relying on pre-defined tasks, OS-Genesis enables agents first to perceive environments and perform step-wise interactions, then retrospectively derive high-quality tasks to enable trajectory-level exploration. A trajectory reward model is then employed to ensure the quality of the generated trajectories. We demonstrate that training GUI agents with OS-Genesis significantly improves their performance on highly challenging online benchmarks. In-depth analysis further validates OS-Genesis's efficiency and its superior data quality and diversity compared to existing synthesis methods. Our codes, data, and checkpoints are available at the OS-Genesis Homepage: https://qiushisun.github.io/OS-Genesis-Home/
Community
This paper introduces OS-Genesis, an interaction-driven pipeline for synthesizing high-quality and diverse GUI agent trajectory data without human supervision or predefined tasks. By leveraging reverse task synthesis and a trajectory reward model, OS-Genesis enables effective end-to-end training of GUI agents.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- AgentTrek: Agent Trajectory Synthesis via Guiding Replay with Web Tutorials (2024)
- Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction (2024)
- Ponder&Press: Advancing Visual GUI Agent towards General Computer Control (2024)
- Multi-modal Agent Tuning: Building a VLM-Driven Agent for Efficient Tool Usage (2024)
- GUI Agents: A Survey (2024)
- Aria-UI: Visual Grounding for GUI Instructions (2024)
- Large Language Model-Brained GUI Agents: A Survey (2024)
Thank you for a great read!
Here's my summary:
The main bottleneck in building GUI agents is finding training data.
GUI agent trajectories are not easy to come by. Crowdsourcing trajectories and then manually annotating them is an option, but it is hard to do at scale.
You could also use synthetic data generation (ask thousands of existing GUI agents to solve tasks and keep only the successful runs), but then it is hard to come up with enough diverse high-level tasks.
Well, a novel technique was just published that creates a promising new paradigm for synthetic data generation: Shanghai AI Lab researchers propose OS-Genesis, a new way to create training data for GUI agents that flips the traditional approach on its head. Instead of starting with predefined tasks and having humans or machines execute them, OS-Genesis first explores the interface naturally, then derives meaningful tasks from those interactions.
Exploration-driven vs. task-driven approach:
- Instead of starting with tasks, OS-Genesis first explores GUIs by clicking and interacting
- It then reverse-engineers high-level tasks from successful interaction patterns
- This leads to more natural and diverse training data than predefined tasks
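To make the idea above concrete, here is a minimal sketch of the "interact first, derive the task afterwards" loop. All names here are illustrative, not from the OS-Genesis codebase, and the template-based `derive_task` stands in for the VLM prompt the paper actually uses:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str           # e.g. "CLICK", "TYPE"
    element: str          # UI element that was acted on
    resulting_state: str  # screen observed after the action

@dataclass
class Trajectory:
    steps: list = field(default_factory=list)

    def record(self, action, element, resulting_state):
        self.steps.append(Step(action, element, resulting_state))

def derive_task(trajectory):
    """Retrospectively derive a high-level task description.

    In OS-Genesis this step is done by a VLM looking at the interaction
    history; a trivial string template stands in for it here.
    """
    if not trajectory.steps:
        return "No-op"
    verbs = " then ".join(
        f"{s.action.lower()} '{s.element}'" for s in trajectory.steps
    )
    last = trajectory.steps[-1]
    return f"Task: {verbs}, ending on the '{last.resulting_state}' screen."

# Exploration phase: interact step-wise, with no predefined task.
traj = Trajectory()
traj.record("CLICK", "Settings icon", "Settings menu")
traj.record("CLICK", "Wi-Fi", "Wi-Fi panel")

# Synthesis phase: the task is derived after the fact.
task = derive_task(traj)
print(task)
```

The key inversion is that the task description is a *function of* the recorded trajectory, rather than the trajectory being an attempt to satisfy a predefined task.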
Novel reward model for trajectory quality:
- Rather than discarding incomplete trajectories, OS-Genesis scores them based on coherence and completion
- This preserves valuable partial successes that would otherwise be wasted
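The graded-scoring idea can be sketched as follows. The paper uses a learned trajectory reward model; the hand-written formula and weights below are purely illustrative, just to show how a partial trajectory earns a nonzero score instead of being thrown away:

```python
def score_trajectory(steps_completed, steps_planned,
                     coherent_transitions, total_transitions):
    """Toy trajectory reward mixing completion and coherence.

    Illustrative stand-in for a learned reward model: partial
    trajectories receive graded scores rather than a binary keep/discard.
    """
    completion = steps_completed / max(steps_planned, 1)
    coherence = coherent_transitions / max(total_transitions, 1)
    return 0.5 * completion + 0.5 * coherence

# A trajectory that finished 3 of 5 planned steps, with all 4 observed
# transitions coherent, still earns a useful score:
reward = score_trajectory(3, 5, 4, 4)
print(reward)  # 0.5 * 0.6 + 0.5 * 1.0 = 0.8
```

Under a threshold-based filter (e.g. keep trajectories scoring above 0.5), this partial success would be retained as training data rather than wasted.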
Superior results across environments:
- Nearly doubles performance on AndroidWorld (9.8% → 17.4%)
By the way, this field of GUI agents is still in its infancy, so you can still make a difference with "low-cost" setups: their paper gets SOTA results with only 8x A100 GPUs!