# LearnGUI: A Unified Demonstration Benchmark for Mobile GUI Agents

Paper | Code | Project Page
## Overview
LearnGUI is the first comprehensive dataset specifically designed for studying demonstration-based learning in mobile GUI agents. It comprises 2,353 instructions across 73 applications with an average of 13.2 steps per task, featuring high-quality human demonstrations for both offline and online evaluation scenarios.
## Key Features
- Unified Benchmark Framework: Provides standardized metrics and evaluation protocols for demonstration-based learning in mobile GUI agents
- Dual Evaluation Modes: Supports both offline (2,252 tasks) and online (101 tasks) evaluation scenarios to assess agent performance
- Rich Few-shot Learning Support: Includes k-shot combinations (k=1,2,3) for each task with varying similarity profiles
- Multi-dimensional Similarity Metrics: Quantifies demonstration relevance across instruction, UI, and action dimensions
- Diverse Real-world Coverage: Spans 73 mobile applications with 2,353 naturally varied tasks reflecting real-world usage patterns
- Expert-annotated Trajectories: Contains high-quality human demonstrations with detailed step-by-step action sequences and element annotations
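The k-shot pairings described above could, for instance, be built by ranking candidate demonstrations along the three similarity dimensions LearnGUI quantifies. A minimal sketch, where the function name, the weights, and the sample scores are all invented for illustration:

```python
# Hypothetical k-shot demonstration selection: score each candidate by a
# weighted mix of instruction, UI, and action similarity, then keep the top k.
def select_demonstrations(candidates, k=3, weights=(0.4, 0.3, 0.3)):
    """candidates: list of (demo_id, ins_sim, ui_sim, act_sim) tuples."""
    w_ins, w_ui, w_act = weights
    scored = sorted(
        candidates,
        key=lambda c: w_ins * c[1] + w_ui * c[2] + w_act * c[3],
        reverse=True,
    )
    return [demo_id for demo_id, *_ in scored[:k]]

# Invented similarity scores for three candidate demonstrations.
demos = [
    ("demo_a", 0.91, 0.88, 0.90),
    ("demo_b", 0.72, 0.95, 0.60),
    ("demo_c", 0.85, 0.80, 0.86),
]
print(select_demonstrations(demos, k=2))  # → ['demo_a', 'demo_c']
```

The weighting is only one possible policy; the dataset itself supplies the per-dimension similarity values, leaving the combination rule to the agent designer.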
## Dataset Structure and Statistics
The dataset is organized into three main splits:
### Dataset Statistics
Split | K-shot | Tasks | Apps | Step actions | Avg InsSim | Avg UISim | Avg ActSim | UIS_H/ActS_H | UIS_H/ActS_L | UIS_L/ActS_H | UIS_L/ActS_L |
---|---|---|---|---|---|---|---|---|---|---|---|
Offline-Train | 1-shot | 2,001 | 44 | 26,184 | 0.845 | 0.901 | 0.858 | 364 | 400 | 403 | 834 |
Offline-Train | 2-shot | 2,001 | 44 | 26,184 | 0.818 | 0.898 | 0.845 | 216 | 360 | 358 | 1,067 |
Offline-Train | 3-shot | 2,001 | 44 | 26,184 | 0.798 | 0.895 | 0.836 | 152 | 346 | 310 | 1,193 |
Offline-Test | 1-shot | 251 | 9 | 3,469 | 0.798 | 0.868 | 0.867 | 37 | 49 | 56 | 109 |
Offline-Test | 2-shot | 251 | 9 | 3,469 | 0.767 | 0.855 | 0.853 | 15 | 42 | 55 | 139 |
Offline-Test | 3-shot | 251 | 9 | 3,469 | 0.745 | 0.847 | 0.847 | 10 | 36 | 49 | 156 |
Online-Test | 1-shot | 101 | 20 | 1,423 | - | - | - | - | - | - | - |

The last four columns count tasks whose paired demonstrations fall into each combination of high (H) or low (L) UI similarity (UIS) and action similarity (ActS).
Each task in LearnGUI contains:
- High-level instruction
- Low-level action sequences
- Screenshot of each step
- UI element details
- Ground truth action labels
- Demonstration pairings with varying similarity profiles
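The per-step UI element annotations carry fields such as `page_caption`, `image_path`, `clickable_elements`, and `scrollable_elements`, where each element records a `bbox`, an `xml_desc`, and a `functionality` string. The sketch below builds one hypothetical record in that shape (all values are invented, and the `[x1, y1, x2, y2]` pixel convention for `bbox` is an assumption) and derives a tap target from a bounding box:

```python
# A hypothetical element-annotation record; field names mirror the dataset's
# annotation schema, but every value here is invented for illustration.
record = {
    "page_caption": "Calendar main view",
    "image_path": "screenshots/SimpleCalendarAddOneEvent/step_0.png",
    "clickable_elements": [
        {
            "bbox": [42, 880, 198, 960],   # assumed [x1, y1, x2, y2] in pixels
            "xml_desc": ["android.widget.Button", "New event"],
            "functionality": "opens the event creation dialog",
            "idx": 0,
        }
    ],
    "scrollable_elements": [],
}

def bbox_center(bbox):
    """Return the (x, y) center of an [x1, y1, x2, y2] box, e.g. as a tap point."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) // 2, (y1 + y2) // 2)

for el in record["clickable_elements"]:
    print(el["idx"], bbox_center(el["bbox"]))  # → 0 (120, 920)
```

An agent grounding a predicted action to the screen would typically resolve the chosen element's `bbox` to such a center point before issuing a tap.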
## Directory Structure
```
LearnGUI/
├── offline/                          # Offline evaluation dataset
│   ├── screenshot.zip                # Screenshot archive (final part of multi-part set)
│   ├── screenshot.z01-z05            # Screenshot archive parts
│   ├── element_anno.zip              # Element annotations
│   ├── instruction_anno.zip          # Instruction annotations
│   ├── task_spilit.json              # Task split information
│   └── low_level_instructions.json   # Detailed step-by-step instructions
│
└── online/                           # Online evaluation dataset
    ├── low_level_instructions/       # JSON files with step instructions for each task
    │   ├── AudioRecorderRecordAudio.json
    │   ├── BrowserDraw.json
    │   ├── SimpleCalendarAddOneEvent.json
    │   └── ... (98 more task instruction files)
    └── raw_data/                     # Raw data for each online task
        ├── AudioRecorderRecordAudio/
        ├── BrowserDraw/
        ├── SimpleCalendarAddOneEvent/
        └── ... (98 more task data directories)
```
## Comparison with Existing Datasets
LearnGUI offers several advantages over existing GUI datasets:
Dataset | # Inst. | # Apps | # Step | Env. | HL | LL | GT | FS |
---|---|---|---|---|---|---|---|---|
PixelHelp | 187 | 4 | 4.2 | β | β | β | β | β |
MoTIF | 276 | 125 | 4.5 | β | β | β | β | β |
UIBert | 16,660 | - | 1 | β | β | β | β | β |
UGIF | 523 | 12 | 6.3 | β | β | β | β | β |
AITW | 30,378 | 357 | 6.5 | β | β | β | β | β |
AITZ | 2,504 | 70 | 7.5 | β | β | β | β | β |
AndroidControl | 15,283 | 833 | 4.8 | β | β | β | β | β |
AMEX | 2,946 | 110 | 12.8 | β | β | β | β | β |
MobileAgentBench | 100 | 10 | - | β | β | β | β | β |
AppAgent | 50 | 10 | - | β | β | β | β | β |
LlamaTouch | 496 | 57 | 7.01 | β | β | β | β | β |
AndroidWorld | 116 | 20 | - | β | β | β | β | β |
AndroidLab | 138 | 9 | 8.5 | β | β | β | β | β |
LearnGUI (Ours) | 2,353 | 73 | 13.2 | β | β | β | β | β |
Note: # Inst. (number of instructions), # Apps (number of applications), # Step (average steps per task), Env. (supports environment interactions), HL (has high-level instructions), LL (has low-level instructions), GT (provides ground truth trajectories), FS (supports few-shot learning).
## License
This dataset is released under the Apache License 2.0.