  - split: train
    path: data/train-*
---
# Dataset Summary

SWE-bench Extra V2 is a dataset for training or evaluating agentic systems that specialize in resolving GitHub issues. It follows the methodology used to build the SWE-bench benchmark and our previous dataset, SWE-bench Extra, and includes 21,000 Issue-Pull Request pairs sourced from 6,000 Python repositories.

# Dataset Description

The SWE-bench Extra V2 dataset supports the development of software engineering agents capable of autonomously solving GitHub issues. The data collection process, based on the SWE-bench methodology, involves the following steps:

1. **Issue and Pull Request Collection**: Issues are gathered and linked with the pull requests that successfully resolved them.
2. **Filtering**: Instances are filtered based on attributes such as issue descriptions, relevant code paths, and test patches.
3. **Automated Dependency Extraction**: Project dependencies are extracted automatically using an LLM.
4. **Execution-based Validation**: The project environments are set up and tests are run to verify that they execute correctly.
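
The filtering step above can be thought of as a set of predicates over candidate instances. The sketch below is a minimal illustration, not the actual validators used in the pipeline; the concrete criteria shown are assumptions, though the field names match the Dataset Structure section:

```python
# Minimal sketch of attribute-based filtering (step 2).
# The concrete criteria here are assumptions; the real pipeline
# applies a larger set of validators.
def keep_instance(instance: dict) -> bool:
    has_issue_text = bool(instance.get("problem_statement", "").strip())
    has_code_patch = bool(instance.get("patch", "").strip())
    has_test_patch = bool(instance.get("test_patch", "").strip())
    return has_issue_text and has_code_patch and has_test_patch

candidates = [
    {"problem_statement": "Crash on empty input", "patch": "diff ...", "test_patch": "diff ..."},
    {"problem_statement": "", "patch": "diff ...", "test_patch": ""},
]
kept = [c for c in candidates if keep_instance(c)]
print(len(kept))  # 1
```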

For a more detailed description of the data collection process, please refer to our blog post, [Scaling data collection for training software engineering agents](https://nebius.com/blog/posts/scaling-data-collection-for-training-swe-agents).

# How to Use

```python
from datasets import load_dataset

ds = load_dataset('nebius/SWE-bench-extra-v2')
```
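
Each record in the loaded dataset is a flat dictionary of string fields. A minimal sketch of iterating over instances and grouping them by repository; hand-built rows stand in for `ds['train']` so the example runs without downloading the dataset:

```python
from collections import defaultdict

# Hand-built rows standing in for ds['train']; real rows carry the
# full field set described in the Dataset Structure section.
rows = [
    {"instance_id": "ownerA__repoA-1", "repo": "ownerA/repoA"},
    {"instance_id": "ownerA__repoA-2", "repo": "ownerA/repoA"},
    {"instance_id": "ownerB__repoB-7", "repo": "ownerB/repoB"},
]

by_repo = defaultdict(list)
for row in rows:
    by_repo[row["repo"]].append(row["instance_id"])

print(len(by_repo))  # 2
```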

# Dataset Statistics

Average, 75th percentile, and maximum values characterizing various attributes of the collected instances. Statistics are micro-averaged without grouping by repository.

| Data       | Type             | Mean      | p75    | Max       |
|------------|------------------|-----------|--------|-----------|
| Issue text | Length (words)   | 111.5     | 146    | 1,294     |
| Code base  | Files (non-test) | 71.71     | 72.00  | 2,264     |
|            | Lines (non-test) | 15,163.38 | 13,777 | 1,039,288 |
| Gold patch | Files edited     | 2.6       | 3      | 7         |
|            | Lines edited     | 56        | 76     | 300       |
| Tests      | Fail to Pass     | 10.94     | 5      | 4,941     |
|            | Total            | 58.5      | 49     | 7,820     |

# Dataset Structure

The dataset contains the following fields. It includes all fields from SWE-bench and adds a `meta` column, which indicates whether the instance meets the "lite" criteria and, if not, lists the failed validators.

| Field name                 | Type | Description |
|----------------------------|------|-------------|
| `instance_id`              | str  | A formatted instance identifier, usually as `repo_owner__repo_name-PR-number`. |
| `patch`                    | str  | The gold patch: the patch generated by the PR (minus test-related code) that resolved the issue. |
| `repo`                     | str  | The repository owner/name identifier from GitHub. |
| `base_commit`              | str  | The commit hash representing the HEAD of the repository before the solution PR is applied. |
| `hints_text`               | str  | Comments made on the issue prior to the creation date of the solution PR's first commit. |
| `created_at`               | str  | The creation date of the pull request. |
| `test_patch`               | str  | A test-file patch that was contributed by the solution PR. |
| `problem_statement`        | str  | The issue title and body. |
| `version`                  | str  | Installation version to use for running evaluation. |
| `environment_setup_commit` | str  | Commit hash to use for environment setup and installation. |
| `FAIL_TO_PASS`             | str  | A JSON list of strings representing the set of tests resolved by the PR and tied to the issue resolution. |
| `PASS_TO_PASS`             | str  | A JSON list of strings representing tests that should pass both before and after the PR is applied. |
| `meta`                     | str  | A JSON dictionary indicating whether the instance is lite, along with a list of failed lite validators if it is not. |
| `license_name`             | str  | The license type of the repository. |
| `install_config`           | str  | Installation configuration for setting up the repository. |
| `requirements`             | str  | Frozen requirements for the repository. |
| `environment`              | str  | Environment configuration for the repository. |
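
Since `FAIL_TO_PASS`, `PASS_TO_PASS`, and `meta` are stored as JSON-encoded strings, they need to be decoded before use. A minimal sketch, with a hand-built row standing in for a loaded instance; the keys inside `meta` shown here are illustrative assumptions:

```python
import json

# Sketch: decoding the JSON-encoded string columns of one instance.
# The row is hand-built for illustration; a real row would come from
# load_dataset('nebius/SWE-bench-extra-v2')['train'].
row = {
    "FAIL_TO_PASS": '["tests/test_core.py::test_issue_fixed"]',
    "PASS_TO_PASS": '["tests/test_core.py::test_existing_behavior"]',
    "meta": '{"is_lite": false, "failed_lite_validators": ["has_many_hunks"]}',
}

fail_to_pass = json.loads(row["FAIL_TO_PASS"])
pass_to_pass = json.loads(row["PASS_TO_PASS"])
meta = json.loads(row["meta"])  # key names are hypothetical

print(len(fail_to_pass), meta["is_lite"])  # 1 False
```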

To execute instances within SWE-bench, you need to provide a default recipe for dependency installation. The constants required for running these instances are described in this [constants.py](https://huggingface.co/datasets/nebius/SWE-bench-extra/blob/main/constants.py).

# License

The dataset is licensed under the Creative Commons Attribution 4.0 license. However, please respect the license of each individual repository on which an instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance.
|