---
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: instance_id
    dtype: string
  - name: base_commit
    dtype: string
  - name: patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: problem_statement
    dtype: string
  - name: hints_text
    dtype: string
  - name: created_at
    dtype: string
  - name: version
    dtype: string
  - name: meta
    struct:
    - name: commit_name
      dtype: string
    - name: failed_lite_validators
      sequence: string
    - name: has_test_patch
      dtype: bool
    - name: is_lite
      dtype: bool
    - name: num_modified_files
      dtype: int64
  - name: install_config
    struct:
    - name: env_vars
      dtype: 'null'
    - name: env_yml_path
      sequence: string
    - name: install
      dtype: string
    - name: log_parser
      dtype: string
    - name: no_use_env
      dtype: bool
    - name: packages
      dtype: string
    - name: pip_packages
      sequence: string
    - name: pre_install
      sequence: string
    - name: python
      dtype: string
    - name: reqs_path
      sequence: string
    - name: test_cmd
      dtype: string
  - name: FAIL_TO_PASS
    sequence: string
  - name: PASS_TO_PASS
    sequence: string
  - name: environment_setup_commit
    dtype: string
  - name: docker_image
    dtype: string
  splits:
  - name: test
    num_bytes: 6906899
    num_examples: 409
  download_size: 1929752
  dataset_size: 6906899
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: cc-by-4.0
tags:
- code
size_categories:
- n<1K
---

# Dataset Summary

SWE-rebench-leaderboard is a continuously updated, curated subset of the full [SWE-rebench](https://huggingface.co/datasets/nebius/SWE-rebench) corpus, tailored for benchmarking software engineering agents on real-world tasks. These tasks are used in the [SWE-rebench leaderboard](https://swe-rebench.com/leaderboard). For more details on the benchmark methodology and data collection process, please refer to our paper, [SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents](https://arxiv.org/abs/2505.20411).
All Docker images required to run the tasks are pre-built and publicly available on [Docker Hub](https://hub.docker.com/repositories/swerebench), so you do not need to build them yourself. The specific image for each task is listed in the `docker_image` column.

To get the exact subset of tasks used for a specific month's SWE-rebench leaderboard, filter the dataset by the `created_at` field.

# News

[2025/08/04] Added 34 July tasks, each with a corresponding Docker image.

# How to Use

```python
from datasets import load_dataset

ds = load_dataset('nebius/SWE-rebench-leaderboard')

# Select the June 2025 tasks by their creation month
ds_june_2025 = ds['test'].filter(lambda x: x['created_at'].startswith('2025-06'))
```

# Dataset Structure

The SWE-rebench dataset schema extends the original SWE-bench schema with additional fields to support richer analysis. The complete schema is detailed in the table below. For more information about this data and the methodology behind collecting it, please refer to our paper.

| Field name | Type | Description |
|----------------------------|--------|--------------------------------------------------------------------------------------------------|
| `instance_id` | str | A formatted instance identifier, usually `repo_owner__repo_name-PR-number`. |
| `patch` | str | The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue. |
| `repo` | str | The repository owner/name identifier from GitHub. |
| `base_commit` | str | The commit hash representing the HEAD of the repository before the solution PR is applied. |
| `hints_text` | str | Comments made on the issue prior to the solution PR's first commit creation date. |
| `created_at` | str | The creation date of the pull request. |
| `test_patch` | str | A test-file patch that was contributed by the solution PR. |
| `problem_statement` | str | The issue title and body. |
| `version` | str | Installation version to use for running evaluation. |
| `environment_setup_commit` | str | Commit hash to use for environment setup and installation. |
| `FAIL_TO_PASS` | str | A JSON list of strings representing the set of tests resolved by the PR and tied to the issue resolution. |
| `PASS_TO_PASS` | str | A JSON list of strings representing tests that should pass both before and after the PR is applied. |
| `meta` | str | A JSON dictionary indicating whether the instance is lite, along with a list of failed lite validators if it is not. |
| `license_name` | str | The type of license of the repository. |
| `install_config` | str | Installation configuration for setting up the repository. |
| `docker_image` | str | Docker image name for the instance. |

To execute tasks from SWE-rebench (i.e., set up their environments, apply patches, and run tests), we provide a [fork](https://github.com/SWE-rebench/SWE-bench-fork) of the original SWE-bench execution framework, adapted to our dataset's structure and features. The primary modification introduces functionality to source environment installation constants directly from the `install_config` field present in each SWE-rebench task instance. This allows for more flexible, task-specific environment setups. You can find the details of this modification in the [following commit](https://github.com/SWE-rebench/SWE-bench-fork/commit/980d0cca8aa4e73f1d9f894e906370bef8c4de8a).

To build the necessary Docker images and run agents on SWE-rebench tasks, you have two main options:

1. **Use our SWE-bench fork directly:** Clone the fork and use its scripts for building images and executing tasks. The framework will automatically use the `install_config` from each task.
2. **Integrate similar functionality into your existing codebase:** If you have your own execution framework based on SWE-bench or a different system, you can adapt it by implementing a similar mechanism to parse and use the `install_config` field from the SWE-rebench task instances.
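For the second option, a minimal, illustrative sketch of extracting what an execution harness typically needs from one task instance is shown below. The helper names (`as_list`, `summarize_task`) are hypothetical and not part of our fork; depending on how the data is loaded, fields such as `FAIL_TO_PASS`, `PASS_TO_PASS`, and `install_config` may arrive either as JSON-encoded strings (as in the original SWE-bench schema) or as already-parsed values, so the sketch handles both.

```python
import json


def as_list(value):
    """Normalize FAIL_TO_PASS / PASS_TO_PASS to a Python list,
    whether it arrives as a JSON-encoded string or a list."""
    if isinstance(value, str):
        return json.loads(value)
    return list(value)


def summarize_task(task):
    """Pull out the pieces a harness needs from one task instance
    (a dict-like row of the dataset). Helper is illustrative only."""
    cfg = task["install_config"]
    if isinstance(cfg, str):  # may be a JSON string or a dict
        cfg = json.loads(cfg)
    return {
        "image": task["docker_image"],       # pre-built image on Docker Hub
        "base_commit": task["base_commit"],  # repo state before the gold patch
        "test_cmd": cfg.get("test_cmd"),     # command that runs the test suite
        "fail_to_pass": as_list(task["FAIL_TO_PASS"]),
        "pass_to_pass": as_list(task["PASS_TO_PASS"]),
    }
```

A harness would then pull `image`, check out `base_commit` inside the container, apply the candidate patch, and run `test_cmd`, checking that the `fail_to_pass` tests now pass while the `pass_to_pass` tests keep passing.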
The aforementioned commit can serve as a reference for this integration.

# License

The dataset is licensed under the Creative Commons Attribution 4.0 license. However, please respect the license of each specific repository on which a particular instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance.

# Citation

```bibtex
@misc{badertdinov2025swerebenchautomatedpipelinetask,
      title={SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents},
      author={Ibragim Badertdinov and Alexander Golubev and Maksim Nekrashevich and Anton Shevtsov and Simon Karasik and Andrei Andriushchenko and Maria Trofimova and Daria Litvintseva and Boris Yangel},
      year={2025},
      eprint={2505.20411},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2505.20411}
}
```