---
license: mit
tags:
  - code
pretty_name: RExBench
extra_gated_prompt: >-
  The gold solutions to extensions and success criteria are not publicly
  available so as to alleviate data contamination concerns. We request that you
  help us to prevent the dataset leaking into future model training data by
  keeping agent submission outputs private.
extra_gated_fields:
  I agree that I will NOT make any agent outputs public (eg NOT upload them to a public GitHub repository) to help prevent dataset leakage: checkbox
---

## Dataset Summary

RExBench is a benchmark to test the ability of coding agents to autonomously implement realistic research hypothesis extensions which have not previously been implemented.

The benchmark consists of 12 research experiment implementation tasks. Each task is set up as an extension to an existing research paper and codebase, and is accompanied by instructions written by domain experts.

The original RExBench dataset was released as part of the paper *RExBench: Can coding agents autonomously implement AI research extensions?*

## Dataset Structure

```
.
├── instructions/            # Task-specific instructions
│   ├── checkeval/
│   ├── cogs/
│   ├── entity-tracking-multimodal/
│   ├── explain-then-translate/
│   ├── implicit-ins/
│   ├── mission-impossible/
│   ├── othello/
│   ├── reasoning-or-reciting/
│   ├── re-reading/
│   ├── tree-of-thoughts/
│   ├── varierr-nli/
│   └── winodict/
└── dataset.zip              # ZIP file with original codebase for each task
```

The `instructions/` directory contains an `instructions.md` file for each task. The `dataset.zip` file contains the original codebase for each task, following the same directory structure as `instructions/`.
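
The snippet below is a minimal sketch of how you might pair each task's instructions with its original codebase. It assumes you have downloaded `instructions/` and `dataset.zip` into the current directory; the `codebases/` extraction target is an arbitrary local choice, not part of the dataset.

```python
import zipfile
from pathlib import Path

DATASET_ROOT = Path(".")                    # assumed local copy of this dataset
CODEBASE_DIR = DATASET_ROOT / "codebases"   # arbitrary extraction target

# dataset.zip mirrors the instructions/ layout, with one codebase per task.
with zipfile.ZipFile(DATASET_ROOT / "dataset.zip") as zf:
    zf.extractall(CODEBASE_DIR)

# Walk the tasks and read the expert-written instructions for each one.
for task_dir in sorted((DATASET_ROOT / "instructions").iterdir()):
    if not task_dir.is_dir():
        continue
    instructions = (task_dir / "instructions.md").read_text(encoding="utf-8")
    codebase = CODEBASE_DIR / task_dir.name  # original repo for this task
    print(f"{task_dir.name}: {len(instructions)} chars of instructions, codebase at {codebase}")
```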

## Evaluating an agent

You can create a new benchmark submission here: https://rexbench.com/

The submission should be a single ZIP file that meets the following requirements (a packaging sketch follows the list):

- It must contain one directory for each task: `checkeval`, `cogs`, `entity-tracking-multimodal`, ...
- Each directory must contain `agent.patch` (the patch file with the agent's code edits) and `agent.log` (the log file detailing the agent's trajectory).

To evaluate agent submissions, we run an automatic evaluation suite that executes the agent outputs in a remote environment.

## License

We release our data under a dual license (MIT and Apache 2.0), given the mixed licenses of the repositories included in the full benchmark suite. Please note that this is in contrast to the metadata license shown above (Hugging Face currently only supports assigning a single license to a dataset).