---
license: apache-2.0
size_categories:
- 1K<n<10K
---
# LearnGUI: A Unified Demonstration Benchmark for Mobile GUI Agents
<div align="center">
<img src="assets/teaser-final.drawio.png" alt="The LearnAct Framework and LearnGUI Benchmark focus on addressing the long-tail challenges in mobile GUI agent performance through demonstration-based learning." width="100%">
</div>
[πŸ“„ Paper](https://arxiv.org/abs/2504.13805) | [πŸ’» Code](https://github.com/lgy0404/LearnAct-codebase) | [🌐 Project Page](https://lgy0404.github.io/LearnAct/)
## Overview
LearnGUI is the first comprehensive dataset specifically designed for studying demonstration-based learning in mobile GUI agents. It comprises 2,353 instructions across 73 applications with an average of 13.2 steps per task, featuring high-quality human demonstrations for both offline and online evaluation scenarios.
## 🌟 Key Features
- **Unified Benchmark Framework**: Provides standardized metrics and evaluation protocols for demonstration-based learning in mobile GUI agents
- **Dual Evaluation Modes**: Supports both offline (2,252 tasks) and online (101 tasks) evaluation scenarios to assess agent performance
- **Rich Few-shot Learning Support**: Includes k-shot combinations (k=1,2,3) for each task with varying similarity profiles (see the retrieval sketch after this list)
- **Multi-dimensional Similarity Metrics**: Quantifies demonstration relevance across instruction, UI, and action dimensions
- **Diverse Real-world Coverage**: Spans 73 mobile applications with 2,353 naturally varied tasks reflecting real-world usage patterns
- **Expert-annotated Trajectories**: Contains high-quality human demonstrations with detailed step-by-step action sequences and element annotations
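The k-shot demonstration pairings and their similarity scores ship with the dataset, but the underlying idea can be illustrated with a small retrieval sketch. The snippet below is not LearnGUI's official similarity metric; it simply selects the k most similar demonstration instructions for a query using TF-IDF cosine similarity as a stand-in.

```python
# Illustrative sketch only (not the official LearnGUI retrieval code): pick the k
# demonstration tasks whose instructions are most similar to a query instruction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve_demonstrations(query_instruction, candidate_instructions, k=3):
    """Return indices of the k candidate instructions most similar to the query."""
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([query_instruction] + candidate_instructions)
    scores = cosine_similarity(vectors[0], vectors[1:]).ravel()
    return scores.argsort()[::-1][:k].tolist()


# Example usage with made-up instructions:
demos = retrieve_demonstrations(
    "Add a calendar event for Monday at 9am",
    ["Record an audio note", "Create a new calendar event", "Draw a circle in the browser"],
    k=1,
)
print(demos)  # -> [1]
```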
## πŸ“Š Dataset Structure and Statistics
The dataset is organized into three main splits:
### Dataset Statistics
| Split | K-shot | Tasks | Apps | Step actions | Avg Ins<sub>Sim</sub> | Avg UI<sub>Sim</sub> | Avg Act<sub>Sim</sub> | UI<sub>SH</sub>Act<sub>SH</sub> | UI<sub>SH</sub>Act<sub>SL</sub> | UI<sub>SL</sub>Act<sub>SH</sub> | UI<sub>SL</sub>Act<sub>SL</sub> |
|-------|--------|-------|------|-------------|------------------------|----------------------|----------------------|--------------------------------|--------------------------------|--------------------------------|--------------------------------|
| Offline-Train | 1-shot | 2,001 | 44 | 26,184 | 0.845 | 0.901 | 0.858 | 364 | 400 | 403 | 834 |
| Offline-Train | 2-shot | 2,001 | 44 | 26,184 | 0.818 | 0.898 | 0.845 | 216 | 360 | 358 | 1,067 |
| Offline-Train | 3-shot | 2,001 | 44 | 26,184 | 0.798 | 0.895 | 0.836 | 152 | 346 | 310 | 1,193 |
| Offline-Test | 1-shot | 251 | 9 | 3,469 | 0.798 | 0.868 | 0.867 | 37 | 49 | 56 | 109 |
| Offline-Test | 2-shot | 251 | 9 | 3,469 | 0.767 | 0.855 | 0.853 | 15 | 42 | 55 | 139 |
| Offline-Test | 3-shot | 251 | 9 | 3,469 | 0.745 | 0.847 | 0.847 | 10 | 36 | 49 | 156 |
| Online-Test | 1-shot | 101 | 20 | 1,423 | - | - | - | - | - | - | - |
*Note: Ins<sub>Sim</sub>, UI<sub>Sim</sub>, and Act<sub>Sim</sub> are the average instruction, UI, and action similarity between each task and its selected demonstrations; the UI<sub>SH/SL</sub>Act<sub>SH/SL</sub> columns count tasks whose demonstrations have high (SH) or low (SL) UI and action similarity.*
Each task in LearnGUI contains (an illustrative record sketch follows this list):
- High-level instruction
- Low-level action sequences
- Screenshot of each step
- UI element details
- Ground truth action labels
- Demonstration pairings with varying similarity profiles
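To make the list above concrete, here is a purely illustrative sketch of what a single task record could look like. The field names and values are hypothetical, not the actual LearnGUI schema; consult the annotation JSON files for the real layout.

```python
# Hypothetical record layout (field names are illustrative, not the real schema),
# showing the kinds of information listed above for one task and one step.
example_task = {
    "high_level_instruction": "Create a new calendar event for Monday at 9am",
    "app": "SimpleCalendar",
    "steps": [
        {
            "low_level_instruction": "Tap the '+' button to add an event",
            "screenshot": "offline/screenshots/task_0001/step_00.png",
            "ui_elements": [{"id": 12, "text": "+", "bbox": [980, 1720, 1060, 1800]}],
            "ground_truth_action": {"type": "click", "element_id": 12},
        }
    ],
    "demonstrations": {"1-shot": ["task_0042"], "2-shot": ["task_0042", "task_0107"]},
}
print(example_task["high_level_instruction"])
```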
## πŸ“ Directory Structure
```
LearnGUI/
β”œβ”€β”€ offline/ # Offline evaluation dataset
β”‚ β”œβ”€β”€ screenshot.zip # Screenshot archives (multi-part)
β”‚ β”œβ”€β”€ screenshot.z01-z05 # Screenshot archive parts
β”‚ β”œβ”€β”€ element_anno.zip # Element annotations
β”‚ β”œβ”€β”€ instruction_anno.zip # Instruction annotations
β”‚ β”œβ”€β”€ task_spilit.json # Task splitting information
β”‚ └── low_level_instructions.json # Detailed step-by-step instructions
β”‚
β”œβ”€β”€ online/ # Online evaluation dataset
β”‚ β”œβ”€β”€ low_level_instructions/ # JSON files with step instructions for each task
β”‚ β”‚ β”œβ”€β”€ AudioRecorderRecordAudio.json
β”‚ β”‚ β”œβ”€β”€ BrowserDraw.json
β”‚ β”‚ β”œβ”€β”€ SimpleCalendarAddOneEvent.json
β”‚ β”‚ └── ... (98 more task instruction files)
β”‚ └── raw_data/ # Raw data for each online task
β”‚ β”œβ”€β”€ AudioRecorderRecordAudio/
β”‚ β”œβ”€β”€ BrowserDraw/
β”‚ β”œβ”€β”€ SimpleCalendarAddOneEvent/
β”‚ └── ... (98 more task data directories)
β”‚
└── static/ # Website assets and images
└── images/ # Dataset visualization images
```
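A minimal way to fetch the files above is with the `huggingface_hub` client, as sketched below; the repo id `lgy0404/LearnGUI` is assumed from this dataset card.

```python
# Sketch: download the full LearnGUI file tree from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="lgy0404/LearnGUI", repo_type="dataset")
print(local_dir)  # local path containing offline/, online/, and static/
```

Note that the screenshots are stored as a split archive (screenshot.z01-z05 plus screenshot.zip); recombine or extract them with a tool that supports split zip archives before use.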
## πŸ” Comparison with Existing Datasets
LearnGUI offers several advantages over existing GUI datasets:
| Dataset | # Inst. | # Apps | # Step | Env. | HL | LL | GT | FS |
|---------|---------|--------|--------|------|----|----|----|----|
| PixelHelp | 187 | 4 | 4.2 | ❌ | βœ… | ❌ | βœ… | ❌ |
| MoTIF | 276 | 125 | 4.5 | ❌ | βœ… | βœ… | βœ… | ❌ |
| UIBert | 16,660 | - | 1 | ❌ | ❌ | βœ… | βœ… | ❌ |
| UGIF | 523 | 12 | 6.3 | ❌ | βœ… | βœ… | βœ… | ❌ |
| AITW | 30,378 | 357 | 6.5 | ❌ | βœ… | ❌ | βœ… | ❌ |
| AITZ | 2,504 | 70 | 7.5 | ❌ | βœ… | βœ… | βœ… | ❌ |
| AndroidControl | 15,283 | 833 | 4.8 | ❌ | βœ… | βœ… | βœ… | ❌ |
| AMEX | 2,946 | 110 | 12.8 | ❌ | βœ… | ❌ | βœ… | ❌ |
| MobileAgentBench | 100 | 10 | - | ❌ | βœ… | ❌ | ❌ | ❌ |
| AppAgent | 50 | 10 | - | ❌ | βœ… | ❌ | ❌ | ❌ |
| LlamaTouch | 496 | 57 | 7.01 | βœ… | βœ… | ❌ | βœ… | ❌ |
| AndroidWorld | 116 | 20 | - | βœ… | βœ… | ❌ | ❌ | ❌ |
| AndroidLab | 138 | 9 | 8.5 | βœ… | βœ… | ❌ | ❌ | ❌ |
| **LearnGUI (Ours)** | **2,353** | **73** | **13.2** | βœ… | βœ… | βœ… | βœ… | βœ… |
*Note: # Inst. (number of instructions), # Apps (number of applications), # Step (average steps per task), Env. (supports environment interactions), HL (has high-level instructions), LL (has low-level instructions), GT (provides ground truth trajectories), FS (supports few-shot learning).*
## πŸ“„ License
This dataset is licensed under Apache License 2.0.