---
license: openrail
task_categories:
- text-generation
- multiple-choice
language:
- en
tags:
- web agent
- agent
pretty_name: MultiModal-Mind2Web~ (test split, snapshot with seed 42, 20 distractors)
size_categories:
- 1K<n<10K
rabbit inc.
[Leaderboard & Blogpost to be released]
Configuration: test split, snapshot with seed 42, 20 distractors
[Multimodal-Mind2Web]() is a dataset proposed by [Zheng et al.](). It's designed for the development and evaluation of generalist web agents and includes action trajectories of humans on real websites.
We've simplified the raw dumps from both Multimodal-Mind2Web and Mind2Web into sequences of observation-action pairs, and adapted the prompting and DOM-encoding techniques from [SeeAct](). This lets us reformulate action generation, localization (terminology used in the large action model, or LAM) / element grounding, and action reasoning (also LAM terminology) / action grounding as a straightforward text-generation and multiple-choice problem, which makes the dataset viable as a generic evaluation for a vision-language model (VLM). The dataset includes prompts (`prompt_0`, `prompt_1`) in a chat format, which makes it easy to evaluate a VLM directly and lowers the implementation barrier common to evaluation frameworks for computer-using agents.
We're currently evaluating state-of-the-art models on the dataset and are gradually providing access to a more comprehensive Gym-compatible evaluation environment. This environment will allow for offline and online evaluations of agents, offering more structural and fundamental improvements over existing benchmarks like MultiModal-Mind2Web. We will share our findings and release the full leaderboard in a blog post soon.
### Preliminary Evaluation Results
* Operation token F1 is computed over `cl100k_base` tokens. We lowercase the text before scoring, regardless of the VLM's output casing.
* Raw VLM outputs are parsed following SeeAct; we will explain the parsing in more detail in the blog post.
* For all metrics, higher is better.
| model | Step Success Rate | Task Success Rate | Operation Token F1 | Element Accuracy |
|:---------------------------|:--------------------|:--------------------|:---------------------|:-------------------|
| claude-3-5-sonnet-20240620 | **0.3847** | **0.0352** | **0.8104** | **0.5005** |
| gemini-1.5-flash-001 | 0.3203 | 0.0300 | 0.7764 | 0.3861 |
| claude-3-opus-20240229 | 0.3048 | 0.0141 | 0.8048 | 0.3720 |
| claude-3-sonnet-20240229 | 0.2770 | 0.0282 | 0.7241 | 0.3528 |
| gpt-4o | 0.2702 | 0.0211 | 0.6239 | 0.3602 |
| gemini-1.5-pro-001 | 0.2191 | 0.0000 | 0.7151 | 0.3453 |
| claude-3-haiku-20240307 | 0.2068 | 0.0000 | 0.7835 | 0.2577 |
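The operation token F1 metric above can be sketched as follows. This is a hedged reconstruction assuming a standard bag-of-tokens F1; the function is tokenizer-agnostic, and the token IDs are assumed to come from `tiktoken.get_encoding("cl100k_base").encode(text.lower())` as described in the notes.

```python
from collections import Counter

def operation_token_f1(pred_tokens: list[int], target_tokens: list[int]) -> float:
    """Bag-of-tokens F1 between a predicted and a target operation string.

    Token IDs are assumed to come from e.g.
    tiktoken.get_encoding("cl100k_base").encode(text.lower()).
    """
    if not pred_tokens or not target_tokens:
        # Both empty counts as a perfect match; one empty counts as a miss.
        return float(pred_tokens == target_tokens)
    # Multiset intersection of predicted and target tokens.
    overlap = sum((Counter(pred_tokens) & Counter(target_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(target_tokens)
    return 2 * precision * recall / (precision + recall)
```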
### Dataset Structure
* `task_id` (str): unique id for each task, equivalent to `annotation_id` in MultiModal-Mind2Web.
* `split` (str): dataset split, one of `test_website`, `test_task`, or `test_domain`, equivalent to the split in MultiModal-Mind2Web.
* `step` (int64): the zero-based index of this action within the trajectory in which it is recorded. Equivalent to `target_action_index` in MultiModal-Mind2Web.
* `task_description` (str): description of the task representing user intent, equivalent to `confirmed_task` in MultiModal-Mind2Web.
* `prompt_0` (str): prompt to generate action description. Contains image input.
* `prompt_1` (str): prompt to perform action and element grounding, used in conjunction with `prompt_0` and outputs of a previous invocation of a VLM.
* `raw_html` (str): raw html of the page before the action is performed, consistent with the raw Mind2Web dump.
* `cleaned_html` (str): sanitized html of the page before the action is performed, similar to `cleaned_html` in MultiModal-Mind2Web.
* `candidates` (sequence[str]): sampled sanitized html representations of salient candidate DOM elements in this snapshot. One element belongs to `pos_candidates` and the rest belong to `neg_candidates` in MultiModal-Mind2Web.
* `target_elements` (sequence[str]): sanitized html representations of viable DOM elements in the webpage that the action is performed on. All elements can be found in `pos_candidates` in MultiModal-Mind2Web.
* `target_op` (str): the operation that should be performed, must be one of `CLICK`, `TYPE`, or `SELECT`. Equivalent to `operation.op` in MultiModal-Mind2Web.
* `target_op_value` (str): the argument supplied to the operation that should be performed. May be empty; equivalent to `operation.value` in MultiModal-Mind2Web.
* `website` (str): website name, equivalent to `website` in MultiModal-Mind2Web.
* `domain` (str): website domain, equivalent to `domain` in MultiModal-Mind2Web.
* `subdomain` (str): website subdomain, equivalent to `subdomain` in MultiModal-Mind2Web.
* `is_valid` (str): whether this row is valid for evaluation. Rows with `is_valid = False` must be excluded when calculating average step-wise, task-level, or trajectory-level performance. A row is invalid if it has an empty screenshot or no positive element in the sanitized html.
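As a sketch of how `is_valid` is meant to be used when aggregating step-level metrics (assuming rows are plain dicts with `is_valid` already parsed to a boolean; the `element_correct` and `op_correct` flags are hypothetical per-step judgments produced by your evaluator, not columns in the dataset):

```python
def average_step_success(rows: list[dict]) -> float:
    """Mean step success over valid rows; is_valid=False rows are excluded."""
    valid = [r for r in rows if r["is_valid"]]
    if not valid:
        return 0.0
    # A step counts as successful only if both the grounded element and
    # the operation (including its value) are judged correct.
    return sum(r["element_correct"] and r["op_correct"] for r in valid) / len(valid)
```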
### Improvements from MultiModal-Mind2Web
1. For all test splits, `raw_html` is not available in the original Multimodal-Mind2Web dataset uploaded to HuggingFace. From [1](), [2]() and [3](), the values in that column are identical to those of `cleaned_html`. We re-associated each action with the raw html from the original Mind2Web dump to restore this field.
2. For all test splits, 11 rows have no screenshot in the original Multimodal-Mind2Web dataset uploaded to HuggingFace. This will make any agent using screenshots as part of its action generation routine fail, which will affect both step-level and task-level metrics. We have labeled these rows with `is_valid = False` to signal to model evaluators while maintaining the completeness of the action trajectory.
3. For all test splits, 761 rows have no ground truth element in `cleaned_html` in the original Multimodal-Mind2Web dataset uploaded to HuggingFace. This will make any agent fail during element grounding, which will affect both step-level and task-level metrics. We have labeled these rows with `is_valid = False` to signal to model evaluators while maintaining the completeness of the action trajectory.
4. We have also simplified the sanitized representation of DOM elements, e.g. shortening `backend_node_id` to `bnid` and preserving more structure in the candidate tree representation. We will explain our implementation in more detail in the blog post, along with a detailed example comparing MultiModal-Mind2Web's representation and ours.
### Assumptions and Problem Definition
A common subroutine of web agents ([MindAct](https://arxiv.org/abs/2306.06070), SeeAct, LAM) is a retriever that identifies salient DOM elements relevant to the action. This localization/element grounding can be reframed as a multiple-choice/re-ranking problem where the VLM must choose an applicable candidate for the action. Since this subroutine is not a universal component of a computer-using agent and is beyond the scope of evaluating a generic VLM's agent-related capabilities, *MultiModal-Mind2Web~* assumes the existence of a strong ranker.
Given a distractor parameter k (in this case, 20), we sample k candidates from the negative pool (provided by the heuristic in MultiModal-Mind2Web) and randomly select a ground truth element from the positive pool to construct the scrambled list of candidates available to the VLM. This simulates the existence of a ranker with a nonzero precision at k+1 (P@k+1 > 0). Randomness is controlled through seeding so that the same sets of elements are always selected and appear in the same positions in the scrambled list. All snapshot datasets released by us are seeded with 42.
> A snapshot with 10 distractors assumes a more powerful retriever (nonzero P@11) than a snapshot with 30 distractors (P@31 > 0). This treatment makes *MultiModal-Mind2Web~* an accessible, generic benchmark for VLMs that requires no complex, stateful setup. It also directly affects the context length required of the VLM and the difficulty of the benchmark in terms of assessing a VLM's in-context learning capabilities.
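The sampling procedure described above can be sketched as follows. This is an illustrative reconstruction rather than our exact implementation (for instance, this sketch seeds per call, whereas the released snapshots are generated with a single seed of 42):

```python
import random

def sample_candidates(pos_candidates, neg_candidates, k=20, seed=42):
    """Build the scrambled candidate list shown to the VLM: k distractors
    from the negative pool plus one randomly chosen ground-truth element,
    seeded so the same elements always appear in the same positions."""
    rng = random.Random(seed)
    distractors = rng.sample(neg_candidates, k)
    target = rng.choice(pos_candidates)
    candidates = distractors + [target]
    rng.shuffle(candidates)
    return candidates, target
```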
Agent evaluations, whether offline or online, are always dynamic. We have internally built a generic environment that supports candidate sampling as well as simulation of various online environments for evaluating agents. This dataset is taken from one particular episode, hence the name "snapshot".
### Usage as a generic VLM eval
*MultiModal-Mind2Web~* can be used as a generic eval of a VLM to assess various aspects of grounded UI understanding and planning, and could be run in addition to existing generalized benchmarks like [MMMU](https://mmmu-benchmark.github.io/). Below is an example implementation of a baseline `gpt-4o` agent using the dataset over two rounds of action generation and grounding:
```python
from openai import OpenAI

client = OpenAI()

def deduce_action(prompt_0, prompt_1):
    # prompt_0 / prompt_1 are assumed to be deserialized into chat-format
    # message lists before being passed in.
    action_prompt = prompt_0
    grounding_prompt = prompt_1
    # Round 1: generate a natural-language description of the next action.
    resp1 = client.chat.completions.create(
        model="gpt-4o",
        messages=action_prompt,
        max_tokens=500,
        temperature=0,
    )
    response = resp1.choices[0].message.content
    # Round 2: append the model's answer, then ask it to ground the action
    # to an operation and a candidate element.
    grounding_prompt = (
        action_prompt
        + [
            {
                "role": "assistant",
                "content": [{"type": "text", "text": f"\n\n{response}"}],
            },
        ]
        + grounding_prompt
    )
    resp2 = client.chat.completions.create(
        model="gpt-4o",
        messages=grounding_prompt,
        max_tokens=500,
        temperature=0,
    )
    final_response = resp2.choices[0].message.content
    return final_response
```
Here `prompt_0` and `prompt_1` correspond to the column values in the files, and `final_response` can be either parsed or evaluated against the target values `target_elements`, `target_op`, and `target_op_value` via a VQA model.
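As an illustration, `final_response` could be parsed with a simple field extractor. The `ELEMENT`/`ACTION`/`VALUE` answer format below follows SeeAct's convention and is an assumption about the output format the prompts request; our exact parser will be described in the blog post.

```python
import re

def parse_final_response(text: str) -> dict:
    """Hypothetical parser for a SeeAct-style answer such as:

        ELEMENT: B
        ACTION: CLICK
        VALUE: None

    Missing fields come back as None."""
    fields = {}
    for key in ("ELEMENT", "ACTION", "VALUE"):
        match = re.search(rf"{key}:\s*(.+)", text)
        fields[key.lower()] = match.group(1).strip() if match else None
    return fields
```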