LearnGUI: A Unified Demonstration Benchmark for Mobile GUI Agents

The LearnAct framework and the LearnGUI benchmark address long-tail challenges in mobile GUI agent performance through demonstration-based learning.

📄 Paper | 💻 Code | 🌐 Project Page

Overview

LearnGUI is the first comprehensive dataset specifically designed for studying demonstration-based learning in mobile GUI agents. It comprises 2,353 instructions across 73 applications with an average of 13.2 steps per task, featuring high-quality human demonstrations for both offline and online evaluation scenarios.

🌟 Key Features

  • Unified Benchmark Framework: Provides standardized metrics and evaluation protocols for demonstration-based learning in mobile GUI agents
  • Dual Evaluation Modes: Supports both offline (2,252 tasks) and online (101 tasks) evaluation scenarios to assess agent performance
  • Rich Few-shot Learning Support: Includes k-shot combinations (k=1,2,3) for each task with varying similarity profiles
  • Multi-dimensional Similarity Metrics: Quantifies demonstration relevance across instruction, UI, and action dimensions (an illustrative sketch follows this list)
  • Diverse Real-world Coverage: Spans 73 mobile applications with 2,353 naturally varied tasks reflecting real-world usage patterns
  • Expert-annotated Trajectories: Contains high-quality human demonstrations with detailed step-by-step action sequences and element annotations
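
The card does not define the similarity functions themselves. Purely as an illustrative sketch of the instruction dimension (the TF-IDF encoder and example strings below are assumptions, not the paper's method), demonstration relevance could be scored like this:

```python
# Illustrative only: scores candidate demonstrations against a query
# instruction with TF-IDF cosine similarity. LearnGUI's actual InsSim/
# UISim/ActSim metrics are defined in the paper and may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "Add a calendar event for Friday at 3pm"
demonstrations = [
    "Create a new calendar event for Monday morning",
    "Record a short audio clip and save it",
]

matrix = TfidfVectorizer().fit_transform([query] + demonstrations)
scores = cosine_similarity(matrix[0], matrix[1:])[0]
best = int(scores.argmax())
print(f"Most similar demonstration: {demonstrations[best]!r} ({scores[best]:.2f})")
```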

📊 Dataset Structure and Statistics

The dataset is organized into three main splits:

Dataset Statistics

| Split | K-shot | Tasks | Apps | Step actions | Avg InsSim | Avg UISim | Avg ActSim | UISHActSH | UISHActSL | UISLActSH | UISLActSL |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Offline-Train | 1-shot | 2,001 | 44 | 26,184 | 0.845 | 0.901 | 0.858 | 364 | 400 | 403 | 834 |
| Offline-Train | 2-shot | 2,001 | 44 | 26,184 | 0.818 | 0.898 | 0.845 | 216 | 360 | 358 | 1,067 |
| Offline-Train | 3-shot | 2,001 | 44 | 26,184 | 0.798 | 0.895 | 0.836 | 152 | 346 | 310 | 1,193 |
| Offline-Test | 1-shot | 251 | 9 | 3,469 | 0.798 | 0.868 | 0.867 | 37 | 49 | 56 | 109 |
| Offline-Test | 2-shot | 251 | 9 | 3,469 | 0.767 | 0.855 | 0.853 | 15 | 42 | 55 | 139 |
| Offline-Test | 3-shot | 251 | 9 | 3,469 | 0.745 | 0.847 | 0.847 | 10 | 36 | 49 | 156 |
| Online-Test | 1-shot | 101 | 20 | 1,423 | - | - | - | - | - | - | - |

Note: Avg InsSim, Avg UISim, and Avg ActSim are the average instruction, UI, and action similarities between each task and its paired demonstrations. The last four columns count tasks by similarity profile: high (H) or low (L) UI similarity (UIS) combined with high or low action similarity (ActS); e.g., UISHActSL counts tasks whose demonstrations have high UI similarity but low action similarity.
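
Given per-task similarity scores, the four profile counts can be reproduced by thresholding UI and action similarity into high/low buckets. A minimal sketch; the threshold value and the ui_sim/act_sim field names are assumptions for illustration, not the paper's cutoffs:

```python
# Illustrative bucketing of tasks into the four similarity profiles.
# The 0.85 threshold and the record fields are assumptions.
from collections import Counter

tasks = [
    {"ui_sim": 0.92, "act_sim": 0.88},
    {"ui_sim": 0.91, "act_sim": 0.31},
    {"ui_sim": 0.40, "act_sim": 0.75},
]

def profile(task, threshold=0.85):
    ui = "H" if task["ui_sim"] >= threshold else "L"
    act = "H" if task["act_sim"] >= threshold else "L"
    return f"UIS{ui}ActS{act}"

print(Counter(profile(t) for t in tasks))
# Counter({'UISHActSH': 1, 'UISHActSL': 1, 'UISLActSL': 1})
```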

Each task in LearnGUI contains:

  • High-level instruction
  • Low-level action sequences
  • Screenshots of each step
  • UI element details
  • Ground truth action labels
  • Demonstration pairings with varying similarity profiles
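
Concretely, the per-step element annotations are stored as JSON. The sketch below shows how one record might be read; the field names follow the dataset's schema, but the file path and the bbox coordinate order are assumptions:

```python
import json

# Minimal sketch of reading one per-step element annotation. The path is
# hypothetical; field names follow the dataset's schema. Some records also
# carry a page_caption and per-element idx, others omit them.
with open("element_anno/step_0001.json") as f:
    step = json.load(f)

print(step["image_path"])                  # screenshot file for this step
print(step.get("page_caption", "(none)"))  # caption, when present

for element in step["clickable_elements"]:
    # bbox is a list of ints; [x1, y1, x2, y2] order is an assumption.
    print(element["functionality"], element["bbox"], element["xml_desc"])
```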

πŸ“ Directory Structure

LearnGUI/
├── offline/                            # Offline evaluation dataset
│   ├── screenshot.zip                  # Screenshot archive (multi-part)
│   ├── screenshot.z01-z05              # Screenshot archive parts
│   ├── element_anno.zip                # Element annotations
│   ├── instruction_anno.zip            # Instruction annotations
│   ├── task_spilit.json                # Task split information
│   └── low_level_instructions.json     # Detailed step-by-step instructions
│
└── online/                             # Online evaluation dataset
    ├── low_level_instructions/         # JSON files with step instructions for each task
    │   ├── AudioRecorderRecordAudio.json
    │   ├── BrowserDraw.json
    │   ├── SimpleCalendarAddOneEvent.json
    │   └── ... (98 more task instruction files)
    └── raw_data/                       # Raw data for each online task
        ├── AudioRecorderRecordAudio/
        ├── BrowserDraw/
        ├── SimpleCalendarAddOneEvent/
        └── ... (98 more task data directories)
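
To work with the files locally, the repository can be downloaded and the multi-part screenshot archive rejoined before extraction. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub (the repo id below is a placeholder) and that the Info-ZIP zip/unzip tools are installed:

```python
import subprocess
from huggingface_hub import snapshot_download

# Download all dataset files; replace the placeholder with the real repo id.
local_dir = snapshot_download(repo_id="<org>/LearnGUI", repo_type="dataset")

# screenshot.zip is split across screenshot.z01-z05 plus screenshot.zip.
# Info-ZIP's `zip -s 0` concatenates the parts into one ordinary archive.
offline = f"{local_dir}/offline"
subprocess.run(
    ["zip", "-s", "0", "screenshot.zip", "--out", "screenshot_full.zip"],
    cwd=offline, check=True,
)
subprocess.run(["unzip", "-o", "screenshot_full.zip"], cwd=offline, check=True)
```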

πŸ” Comparison with Existing Datasets

LearnGUI offers several advantages over existing GUI datasets:

| Dataset | # Inst. | # Apps | # Step | Env. | HL | LL | GT | FS |
|---|---|---|---|---|---|---|---|---|
| PixelHelp | 187 | 4 | 4.2 | ❌ | ✅ | ❌ | ✅ | ❌ |
| MoTIF | 276 | 125 | 4.5 | ❌ | ✅ | ✅ | ✅ | ❌ |
| UIBert | 16,660 | - | 1 | ❌ | ❌ | ✅ | ✅ | ❌ |
| UGIF | 523 | 12 | 6.3 | ❌ | ✅ | ✅ | ✅ | ❌ |
| AITW | 30,378 | 357 | 6.5 | ❌ | ✅ | ❌ | ✅ | ❌ |
| AITZ | 2,504 | 70 | 7.5 | ❌ | ✅ | ✅ | ✅ | ❌ |
| AndroidControl | 15,283 | 833 | 4.8 | ❌ | ✅ | ✅ | ✅ | ❌ |
| AMEX | 2,946 | 110 | 12.8 | ❌ | ✅ | ❌ | ✅ | ❌ |
| MobileAgentBench | 100 | 10 | - | ❌ | ✅ | ❌ | ❌ | ❌ |
| AppAgent | 50 | 10 | - | ❌ | ✅ | ❌ | ❌ | ❌ |
| LlamaTouch | 496 | 57 | 7.01 | ✅ | ✅ | ❌ | ✅ | ❌ |
| AndroidWorld | 116 | 20 | - | ✅ | ✅ | ❌ | ❌ | ❌ |
| AndroidLab | 138 | 9 | 8.5 | ✅ | ✅ | ❌ | ❌ | ❌ |
| LearnGUI (Ours) | 2,353 | 73 | 13.2 | ✅ | ✅ | ✅ | ✅ | ✅ |

Note: # Inst. (number of instructions), # Apps (number of applications), # Step (average steps per task), Env. (supports environment interactions), HL (has high-level instructions), LL (has low-level instructions), GT (provides ground truth trajectories), FS (supports few-shot learning).

📄 License

This dataset is licensed under the Apache License 2.0.
