---
license: mit
tags:
- code
pretty_name: RExBench
extra_gated_prompt: The gold solutions to extensions and success criteria are not publicly available so as to alleviate data contamination concerns. We request that you help us to prevent the dataset leaking into future model training data by keeping agent submission outputs private.
extra_gated_fields:
  I agree that I will NOT make any agent outputs public (eg NOT upload them to a public GitHub repository) to help prevent dataset leakage: checkbox
---

**Dataset Summary**

RExBench is a benchmark that tests the ability of coding agents to autonomously implement realistic research hypothesis extensions that have not previously been implemented. The benchmark consists of 12 research experiment implementation tasks, where each task is set up as an extension to an existing research paper and codebase, accompanied by instructions written by domain experts.

The original RExBench dataset was released as part of the paper *RExBench: Can coding agents autonomously implement AI research extensions?*

**Dataset Structure**

```bash
.
├── instructions/    # Task-specific instructions
│   ├── checkeval/
│   ├── cogs/
│   ├── entity-tracking-multimodal/
│   ├── explain-then-translate/
│   ├── implicit-ins/
│   ├── mission-impossible/
│   ├── othello/
│   ├── reasoning-or-reciting/
│   ├── re-reading/
│   ├── tree-of-thoughts/
│   ├── varierr-nli/
│   └── winodict/
└── dataset.zip      # ZIP file with the original codebase for each task
```

The `instructions/` directory contains an `instructions.md` file for each task. The `dataset.zip` file contains the original codebase for each task, following the same directory structure as `instructions/`. A minimal sketch for extracting the codebases is included at the end of this card.

**Evaluating an agent**

You can create a new benchmark submission at https://rexbench.com/.

The submission should be a single ZIP file that meets the following requirements:

- It must contain one directory for each task: **checkeval**, **cogs**, **entity-tracking-multimodal**, ...
- Each directory must contain **agent.patch** (the patch file with the agent's code edits) and **agent.log** (the log file detailing the agent's trajectory).

A minimal packaging sketch is included at the end of this card. To evaluate agent submissions, we run an automatic evaluation suite that executes agent outputs in a remote environment.

**License**

We release our data under a dual license (MIT and Apache 2.0), given the mixed licenses of the repositories included in the full benchmark suite. Please note that this is in contrast to the metadata license shown above (Hugging Face currently only supports assigning a single license to a dataset).
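
For convenience, here is a minimal extraction sketch. It assumes `dataset.zip` and the `instructions/` directory have already been downloaded into the current directory (for example via the Hugging Face Hub web UI or CLI), and that extracting into a local `dataset/` directory is acceptable; adjust the paths to your setup.

```bash
# Minimal sketch (assumed local paths): extract the per-task codebases
# from dataset.zip and list the accompanying instruction files.
unzip -q dataset.zip -d dataset/

# Each task directory under instructions/ has an instructions.md file.
for task in instructions/*/; do
    echo "Task: ${task} -> ${task}instructions.md"
done
```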
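
And here is a minimal sketch for packaging a submission ZIP under the requirements above. The `runs/` and `submission/` directory names are illustrative assumptions, not part of the benchmark; what matters is that the archive contains one top-level directory per task, each holding an `agent.patch` and an `agent.log`.

```bash
# Minimal sketch (illustrative paths): assemble a submission ZIP.
# runs/<task>/ is a hypothetical location for each task's agent outputs.
mkdir -p submission
for task in checkeval cogs entity-tracking-multimodal; do   # ...and the remaining tasks
    mkdir -p "submission/${task}"
    cp "runs/${task}/agent.patch" "runs/${task}/agent.log" "submission/${task}/"
done
# Zip so that the task directories sit at the top level of the archive.
(cd submission && zip -r ../submission.zip .)
```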