data_files:
- split: test
  path: data/test-*
license: cc-by-4.0
---
# Dataset Summary

SWE-rebench is a large-scale dataset designed to support training and evaluation of LLM-based software engineering (SWE) agents. It is constructed using a fully automated pipeline that continuously extracts real-world interactive SWE tasks from GitHub repositories at scale, as detailed in our paper [SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents](https://arxiv.org/abs/2505.20411). The dataset currently comprises over 21,000 issue–pull request pairs from 3,400+ Python repositories, each validated for correctness through automated environment setup and test execution. A curated subset of these tasks also forms the basis of our continuously updated [SWE-rebench leaderboard](https://swe-rebench.com/leaderboard).

SWE-rebench builds upon and extends the methodology of [SWE-bench](https://www.swebench.com/) by incorporating several key enhancements detailed in our paper, including:

* A fully automated pipeline for continuous task collection.
* LLM-driven extraction and validation of environment installation instructions.
* An automated LLM-based task quality assessment pipeline that annotates tasks with labels such as clarity, complexity, or test patch validity.
# How to Use
```python
from datasets import load_dataset

ds = load_dataset('nebius/SWE-rebench')
```
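For a quick look at what an instance contains, the sketch below loads the `test` split declared in the dataset config and prints the two SWE-rebench-specific string fields described in the schema table in the next section. Any other column names mentioned in the comments follow the upstream SWE-bench schema and are assumptions rather than guarantees of this card.

```python
from datasets import load_dataset

# Load only the validated test split declared in the card's config.
ds = load_dataset('nebius/SWE-rebench', split='test')
print(f"{len(ds)} task instances")

task = ds[0]

# SWE-rebench-specific string fields (see the schema table below).
print(task['requirements'][:300])   # frozen requirements for the task's repository
print(task['environment'][:300])    # environment configuration for the repository

# The remaining columns are expected to mirror the SWE-bench schema
# (e.g. repo, instance_id, problem_statement); inspect them to be sure.
print(ds.column_names)
```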
# Dataset Structure
The SWE-rebench dataset schema extends the original SWE-bench schema with additional fields to support richer analysis. The complete schema is detailed in the table below. For more information about the data and the methodology behind its collection, please refer to our paper.

| Field name | Type | Description |
|----------------------------|--------|-------------------------------------------------------------------------------------------------|
| ... | ... | ... |
| `requirements` | str | Frozen requirements for the repository. |
| `environment` | str | Environment configuration for the repository. |

To execute tasks from SWE-rebench (i.e., set up their environments, apply patches, and run tests), we provide a [fork](https://github.com/SWE-rebench/SWE-bench-fork) of the original SWE-bench execution framework, adapted for our dataset's structure and features.
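The fork above is the supported way to run tasks end to end. Purely as an illustration of what a single instance encodes, here is a minimal, unofficial sketch of checking out a task's repository and applying its test patch; it assumes the standard SWE-bench fields `repo`, `base_commit`, and `test_patch` (not shown in the excerpted table above) and does not perform the per-task environment setup that the fork automates via `requirements` and `environment`.

```python
import os
import subprocess
import tempfile


def checkout_task(task: dict) -> str:
    """Clone the task's repository at its base commit into a temporary directory.

    Illustration only: `repo` and `base_commit` are assumed to follow the
    upstream SWE-bench schema.
    """
    workdir = tempfile.mkdtemp(prefix="swe-rebench-")
    subprocess.run(
        ["git", "clone", f"https://github.com/{task['repo']}.git", workdir],
        check=True,
    )
    subprocess.run(["git", "checkout", task["base_commit"]], cwd=workdir, check=True)
    return workdir


def apply_test_patch(task: dict, workdir: str) -> None:
    """Write and apply the task's test patch (assumed `test_patch` field)."""
    patch_path = os.path.join(workdir, "test.patch")
    with open(patch_path, "w") as f:
        f.write(task["test_patch"])
    subprocess.run(["git", "apply", patch_path], cwd=workdir, check=True)
```

Running the task's tests on top of such a checkout additionally requires the environment described by `requirements` and `environment`, which is exactly what the execution fork handles for you.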
# License
The dataset is licensed under the Creative Commons Attribution 4.0 license. However, please respect the license of each specific repository on which a particular instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance.
# Citation
```bibtex
@misc{badertdinov2025swerebenchautomatedpipelinetask,
  title={SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents},
  author={Ibragim Badertdinov and Alexander Golubev and Maksim Nekrashevich and Anton Shevtsov and Simon Karasik and Andrei Andriushchenko and Maria Trofimova and Daria Litvintseva and Boris Yangel},
  year={2025},
  eprint={2505.20411},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2505.20411}
}
```