---
license:
  - apache-2.0
size_categories:
  - 100<n<1K
pretty_name: EditBench
tags:
  - code
  - code-editing
  - code-generation
metrics:
  - execution-accuracy
---

# EditBench Dataset

This dataset contains code-editing tasks extracted from the EditBench evaluation framework, which is designed to evaluate model performance on code-editing tasks. It is provided as a test-only benchmark. Each sample includes:

## Core Files (Python)

- `original_code.py`: Starting code file
- `highlighted_code.py`: Specific section of code to be modified
- `instruction.txt`: User instructions for the task
- `test_code.py`: Tests that validate the implementation

## Supporting Files (Python)

- `requirements.txt`: Dependencies needed to run the code
- `conftest.py`: Pytest configuration
- `test_utils.py`: Utilities for testing

## Core Files (JavaScript)

- `original_code.js`: Starting code file (or `.jsx`)
- `highlighted_code.js`: Specific section of code to be modified
- `instruction.txt`: User instructions for the task
- `test_code`: Tests that validate the implementation (from `tests/*.test.js`)
- `package_json`: NPM package configuration
- `jest_setup`: Jest testing setup (if applicable)
- `babel_config`: Babel configuration (if applicable)
- `other_files`: Additional files needed for the project
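Since each sample stores its files as fields, a harness typically has to write them back out to disk before running the tests. The sketch below shows one way to do that for a Python task; the field names mirror the file lists above but are an assumption, and the sample dict is a made-up stand-in, not real dataset content — check the actual column names on the loaded dataset.

```python
import os
import tempfile

# Hypothetical sample: field names mirror the file list above, but the
# real column names in the dataset may differ.
sample = {
    "original_code": "def add(a, b):\n    return a - b\n",
    "highlighted_code": "    return a - b\n",
    "instruction": "Fix add() so it returns the sum of a and b.",
    "test_code": "from original_code import add\n\ndef test_add():\n    assert add(1, 2) == 3\n",
}

def materialize(sample, task_dir):
    """Write a sample's fields out as the on-disk files a test runner expects."""
    os.makedirs(task_dir, exist_ok=True)
    mapping = {
        "original_code": "original_code.py",
        "highlighted_code": "highlighted_code.py",
        "instruction": "instruction.txt",
        "test_code": "test_code.py",
    }
    for field, filename in mapping.items():
        with open(os.path.join(task_dir, filename), "w") as f:
            f.write(sample[field])
    return sorted(os.listdir(task_dir))

task_dir = os.path.join(tempfile.mkdtemp(), "task_0")
print(materialize(task_dir=task_dir, sample=sample))
# → ['highlighted_code.py', 'instruction.txt', 'original_code.py', 'test_code.py']
```

From there, a runner could install `requirements.txt` into a clean environment and invoke `pytest` inside `task_dir` to score the edit.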

## Dataset Statistics

- Total samples: 113
- Python samples: 104
- JavaScript samples: 9
- Expected samples: 113 (57 easy + 56 hard questions)
- Found samples: 113 / 113

## Usage

This dataset is provided as a test-only benchmark and can be loaded directly with the Hugging Face Datasets library:

```python
from datasets import load_dataset

# Note that this dataset only has a 'test' split
dataset = load_dataset("your-username/editbench", split="test")
```
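Once loaded, the split can be iterated like any Datasets object, for example to tally Python vs. JavaScript tasks. The sketch below uses stand-in rows instead of a network download, and the `language` field is an assumption about the schema — inspect `dataset.column_names` on the real data to find the actual field.

```python
from collections import Counter

# Stand-in rows; with the real dataset, iterate the object returned by
# load_dataset(...) instead. The "language" key here is an assumed field
# name, not a confirmed part of the schema.
rows = [
    {"language": "python", "instruction": "Fix the off-by-one error."},
    {"language": "python", "instruction": "Add input validation."},
    {"language": "javascript", "instruction": "Memoize the selector."},
]

counts = Counter(row["language"] for row in rows)
print(counts["python"], counts["javascript"])  # → 2 1
```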

## Ethical Considerations and Limitations

- This dataset is provided exclusively for benchmark/evaluation purposes
- Models should NOT be trained on this dataset, as it is specifically designed to test model capabilities
- Hugging Face's Terms of Service prohibit using benchmark datasets for training
- We recommend implementing your model's training pipeline to explicitly exclude this dataset

## Citation

If you use this dataset, please cite the original EditBench work.

```bibtex
@misc{chi2025editbench,
  title        = {EditBench: Evaluating LLM Abilities to Perform Real-World Code Edits},
  author       = {Wayne Chi and Valerie Chen and Ryan Shar and Aditya Mittal and Jenny Liang and Wei-Lin Chiang and Anastasios Nikolas Angelopoulos and Ion Stoica and Graham Neubig and Ameet Talwalkar and Chris Donahue},
  year         = {2025},
  note         = {arXiv preprint}
}
```

## Usage Restrictions

This dataset is provided for research and evaluation purposes only. By using this dataset, you agree not to:

1. Train models on it (it is a benchmark dataset)
2. Scrape or incorporate it into pretraining data
3. Use it for any purpose other than evaluation