# IDM Eval Set
A validation set for evaluating Inverse Dynamics Models on macOS screen recordings. Each sample is a 5-second clip of real desktop usage (browser, IDE, terminal) paired with a ground-truth action log captured at the OS level.
The task: given a short screen recording, predict the sequence of user input actions (keypresses, mouse clicks, scrolls) that produced the observed screen changes.
## Dataset Structure

```
clips_recording_{uuid}_seg{N}/
    clip_000_{tag}.mp4    # 5s screen recording (1728x1080)
    clip_000_{tag}.json   # ground-truth action log
annotations.json          # visibility labels per action
gt_overrides.json         # manual corrections to GT details
```
## Stats

| Stat | Value |
|---|---|
| Clips | 51 |
| Recordings | 11 |
| Total raw actions | 10,914 |
| Resolution | 1728 x 1080 |
| Clip duration | 5 seconds |
Tag distribution:
| Tag | Count |
|---|---|
| scroll/drag | 18 |
| keystroke-heavy | 17 |
| mixed | 4 |
| click-heavy | 4 |
| hotkeys | 2 |
| hard-case | 1 |
## Action Log Format

Each clip JSON contains:

```json
{
  "start_s": 206.913,
  "end_s": 211.913,
  "tag": "keystroke-heavy",
  "actions": [
    [206933331, ["KeyPress", [32, "Space"]]],
    [207233331, ["KeyRelease", [32, "Space"]]],
    [208633331, ["MousePress", ["Left", 0, 0]]],
    [209533331, ["MouseScroll", [0, -1, 0, 0]]]
  ]
}
```
- Timestamps are absolute microseconds (subtract `start_s * 1e6` for clip-relative times)
- Action types: `KeyPress`, `KeyRelease`, `MousePress`, `MouseRelease`, `MouseMove`, `MouseScroll`, `ContextChanged`
- KeyPress params: `[keycode, key_name]`
- MousePress params: `[button, x, y]` (coordinates not captured in this version)
- MouseScroll params: `[dx, dy, x, y]`
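The timestamp convention above can be sketched in a few lines of Python. The helper name below is ours, not part of the dataset; it takes an already-parsed clip JSON and returns clip-relative times in seconds:

```python
def clip_relative_actions(clip):
    """Convert a clip's absolute-microsecond action timestamps to
    clip-relative seconds. `clip` is the parsed clip JSON dict."""
    t0_us = clip["start_s"] * 1e6  # clip start in absolute microseconds
    return [
        ((ts_us - t0_us) / 1e6, kind, params)
        for ts_us, (kind, params) in clip["actions"]
    ]
```

With the example clip above, the first `KeyPress` lands at roughly 0.020 s into the clip.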
## Annotations

`annotations.json` contains manual visibility labels for each primary action (`KeyPress`, `MousePress`, `MouseScroll`) in each clip. Keys are clip paths, values map action indices to visibility labels:
| Label | Count | Meaning |
|---|---|---|
| `visible` | 485 | Effect is directly visible in the frames |
| `inferable` | 273 | Effect can be inferred but isn't directly visible |
| `ambiguous` | 25 | Action type is unclear from video (e.g. scroll via mouse vs keyboard) |
| `not_predictable` | 27 | Cannot be predicted from video alone |
Example:

```json
{
  "clips_recording_.../clip_003_keystroke-heavy": {
    "0": "visible",
    "1": "inferable",
    "2": "ambiguous",
    "3": "not_predictable"
  }
}
```
Use these labels to filter the ground truth when scoring, e.g. exclude `not_predictable` (and optionally `ambiguous`) actions from recall calculations.
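Filtering by visibility label is a one-liner once `annotations.json` is parsed. A minimal sketch (the helper name and default exclusion set are ours):

```python
def scoreable_indices(annotations, clip_key,
                      exclude=("not_predictable", "ambiguous")):
    """Return the GT action indices to keep when scoring a clip,
    dropping any whose visibility label is in `exclude`.
    `annotations` is the parsed annotations.json dict."""
    labels = annotations.get(clip_key, {})
    return sorted(int(i) for i, lab in labels.items() if lab not in exclude)
```

Passing `exclude=("not_predictable",)` keeps `ambiguous` actions in scope for the stricter recall variant.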
## GT Overrides

`gt_overrides.json` contains manual corrections to ground-truth action details (e.g. when a modifier key was held from before the clip). Structure:
```json
{
  "clips_recording_.../clip_name": {
    "edits": {"5": "Cmd+Tab"},
    "deletions": [],
    "additions": [{"frame": 8, "type": "KeyPress", "detail": "Space"}]
  }
}
```
Apply overrides after processing the raw GT through `filter_gt_actions` + `coalesce_gt_events`.
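Applying an override entry might look like the sketch below. This assumes the coalesced GT is a list of `{"frame", "type", "detail"}` dicts and that edit/deletion indices refer to that list; the actual representation produced by `filter_gt_actions` and `coalesce_gt_events` is not specified here, so treat this as illustrative:

```python
def apply_gt_overrides(events, override):
    """Apply one clip's gt_overrides.json entry to a coalesced GT
    event list (list of {"frame", "type", "detail"} dicts).
    Representation is an assumption, not part of the dataset spec."""
    out = [dict(e) for e in events]  # don't mutate the caller's list
    # edits: replace the detail of the event at a given index
    for idx, detail in override.get("edits", {}).items():
        out[int(idx)]["detail"] = detail
    # deletions: remove by index, descending so earlier indices stay valid
    for idx in sorted(map(int, override.get("deletions", [])), reverse=True):
        del out[idx]
    # additions: append new events, then restore frame order
    out.extend(override.get("additions", []))
    out.sort(key=lambda e: e["frame"])
    return out
```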
## Gesture Evaluation (Mouse Movement + Scroll)

In addition to sparse event evaluation (`KeyPress`, `MouseClick`, `MouseScroll`), this dataset supports gesture evaluation: predicting per-frame mouse cursor movement and scroll magnitude.
### Gesture GT Files

- `gesture_gt_exp.json` — exponential bin indices (±1 to ±9 per axis)
- `gesture_gt_norm.json` — normalized 0-1000 scale (resolution-independent)
Both are derived from the raw MouseMove and MouseScroll events in each clip JSON. Mouse deltas are accumulated per frame (5fps), normalized by video resolution, then binned (exp mode) or kept as integers (norm mode). Scroll sign convention: positive = scroll down (content moves up).
### Format

- MouseMove details: `"dx,dy"` — signed bin indices (exp) or normalized integers (norm). Positive dx = right, positive dy = down.
- MouseScroll details: signed bin index or normalized integer. Positive = scroll down.
### Exponential Bin Scale (mouse dx/dy)
| Bin | Normalized range (0-1000) |
|---|---|
| ±1 | 0-1 |
| ±2 | 1-3 |
| ±3 | 3-7 |
| ±4 | 7-16 |
| ±5 | 16-40 |
| ±6 | 40-95 |
| ±7 | 95-230 |
| ±8 | 230-550 |
| ±9 | >550 |
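The bin table can be implemented as a simple threshold lookup. A sketch (the function name is ours; which bin receives an exact edge value, and how a zero delta is binned, are assumptions, since the table only gives ranges):

```python
def exp_bin(value):
    """Map a signed normalized magnitude (0-1000 scale) to a signed
    exponential bin index per the table above. Edge values are
    assigned to the lower bin (an assumption; the table only gives
    ranges)."""
    edges = [1, 3, 7, 16, 40, 95, 230, 550]  # upper edges of bins 1..8
    mag = abs(value)
    for i, edge in enumerate(edges, start=1):
        if mag <= edge:
            bin_idx = i
            break
    else:
        bin_idx = 9  # anything above 550
    return bin_idx if value >= 0 else -bin_idx
```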
### Metrics
- R² (coefficient of determination): primary metric for mouse movement quality. VPT (OpenAI) reports R²=0.97 on their trained IDM.
- L2: Euclidean distance between predicted and GT bin vectors (lower = better).
- F1: frame-level detection (did the model predict the right frames?).
- DirAcc: direction accuracy (did dx/dy signs match?).
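For reference, the primary metric (R²) over paired per-frame values can be computed in plain Python; real evaluation code would additionally handle frame alignment and the two movement axes:

```python
def r_squared(pred, gt):
    """Coefficient of determination over paired per-frame values
    (e.g. bin indices). 1.0 is a perfect fit; the score can go
    negative when predictions are worse than predicting the GT mean."""
    mean = sum(gt) / len(gt)
    ss_res = sum((p - g) ** 2 for p, g in zip(pred, gt))  # residual sum of squares
    ss_tot = sum((g - mean) ** 2 for g in gt)             # total sum of squares
    return 1.0 - ss_res / ss_tot
```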
## Evaluation