Add task category and library name to dataset card (#3)
- Add task category and library name to dataset card (46fd55e24ac44bce1a8ad54ccf561e6bd7f411d7)
Co-authored-by: Niels Rogge <[email protected]>
README.md CHANGED
@@ -1,4 +1,8 @@
 ---
+license: cc-by-4.0
+task_categories:
+- other
+library_name: datasets
 dataset_info:
   features:
   - name: instance_id
@@ -94,10 +98,8 @@ configs:
   data_files:
   - split: test
     path: data/test-*
-license: cc-by-4.0
 ---
 
-
 # Dataset Summary
 
 SWE-rebench is a large-scale dataset designed to support training and evaluation of LLM-based software engineering (SWE) agents, building upon and expanding our earlier release, [SWE-bench-extra](https://huggingface.co/datasets/nebius/SWE-bench-extra). It is constructed using a fully automated pipeline that continuously extracts real-world interactive SWE tasks from GitHub repositories at scale, as detailed in our paper [SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents](https://arxiv.org/abs/2505.20411). The dataset currently comprises over 21,000 issue–pull request pairs from 3,400+ Python repositories, each validated for correctness through automated environment setup and test execution. A curated subset of these tasks also forms the basis of our continuously updated [SWE-rebench leaderboard](https://swe-rebench.com/leaderboard).
@@ -162,5 +164,4 @@ The dataset is licensed under the Creative Commons Attribution 4.0 license. Howe
 archivePrefix={arXiv},
 primaryClass={cs.SE},
 url={https://arxiv.org/abs/2505.20411}
-}
-```
+}
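The frontmatter edit in this commit is mechanical: everything between the two `---` fences at the top of a dataset card is YAML metadata, and the Hub reads keys such as `task_categories` and `library_name` from it to populate its filters and code snippets. A minimal sketch of that split, where `CARD` is a trimmed illustrative excerpt (not the full README) showing only the fields this commit touches:

```python
# Minimal sketch: separate a dataset card's YAML frontmatter from its
# markdown body. CARD is a trimmed, illustrative excerpt of the README
# after this commit, not the complete card.
CARD = """---
license: cc-by-4.0
task_categories:
- other
library_name: datasets
dataset_info:
  features:
  - name: instance_id
---

# Dataset Summary
"""


def split_card(text: str) -> tuple[str, str]:
    """Split a card into (frontmatter, body) on the first two --- fences."""
    _, frontmatter, body = text.split("---", 2)
    return frontmatter.strip(), body.strip()


frontmatter, body = split_card(CARD)
print("library_name: datasets" in frontmatter)  # True after this commit
```

A real parser would feed the frontmatter string to a YAML loader; the point here is only that the keys added by this commit live in that fenced block, which is why the change touches nothing in the card's prose.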