---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: instance_id
    dtype: string
  - name: base_commit
    dtype: string
  - name: patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: problem_statement
    dtype: string
  - name: hints_text
    dtype: string
  - name: created_at
    dtype: int64
  - name: labels
    sequence: string
  - name: category
    dtype: string
  - name: edit_functions
    sequence: string
  - name: added_functions
    sequence: string
  splits:
  - name: test
    num_bytes: 9540894
    num_examples: 660
  download_size: 3490581
  dataset_size: 9540894
---
# LOC-BENCH: A Benchmark for Code Localization

LOC-BENCH is a dataset specifically designed for evaluating code localization methods in software repositories. It provides a diverse set of issues, including bug reports, feature requests, security vulnerabilities, and performance optimizations.
Please refer to the official version, [Loc-Bench_V1](https://huggingface.co/datasets/czlll/Loc-Bench_V1), for evaluating code localization methods and for easy comparison with our approach.
Code: https://github.com/gersteinlab/LocAgent
## Details
This is the dataset that was used in the early version of our paper. We later released a refined version, [czlll/Loc-Bench_V1](https://huggingface.co/datasets/czlll/Loc-Bench_V1), with improved data quality achieved by filtering out examples that do not modify any functions. We recommend using the refined dataset to evaluate code localization performance.

The table below shows the distribution of categories in the dataset Loc-Bench_V0.1:
| category | count |
|---|---|
| Bug Report | 282 |
| Feature Request | 203 |
| Performance Issue | 144 |
| Security Vulnerability | 31 |
## How to Use
You can easily load LOC-BENCH using Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("czlll/Loc-Bench_V0.1", split="test")
```
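Each record follows the schema listed in the metadata above (`instance_id`, `category`, `edit_functions`, etc.). As a minimal sketch of working with the loaded split, the snippet below uses toy records standing in for the real data (the field values are illustrative, not taken from the dataset) to count examples per category and filter to one category of interest:

```python
from collections import Counter

# Toy records mirroring Loc-Bench's schema; in practice these come from
# load_dataset("czlll/Loc-Bench_V0.1", split="test").
records = [
    {"instance_id": "repoA-1", "category": "Bug Report", "edit_functions": ["pkg.mod.func_a"]},
    {"instance_id": "repoB-2", "category": "Feature Request", "edit_functions": ["pkg.mod.func_b"]},
    {"instance_id": "repoA-3", "category": "Bug Report", "edit_functions": []},
]

# Count examples per category, as in the distribution table above.
counts = Counter(r["category"] for r in records)
print(counts)  # Counter({'Bug Report': 2, 'Feature Request': 1})

# Keep only bug reports whose gold localization targets are non-empty.
bug_reports = [r for r in records if r["category"] == "Bug Report" and r["edit_functions"]]
print([r["instance_id"] for r in bug_reports])  # ['repoA-1']
```

The same pattern applies to the real split, e.g. `Counter(dataset["category"])` reproduces the category distribution table.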
## Citation
If you use LOC-BENCH in your research, please cite our paper:

```bibtex
@article{chen2025locagent,
  title={LocAgent: Graph-Guided LLM Agents for Code Localization},
  author={Chen, Zhaoling and Tang, Xiangru and Deng, Gangda and Wu, Fang and Wu, Jialong and Jiang, Zhiwei and Prasanna, Viktor and Cohan, Arman and Wang, Xingyao},
  journal={arXiv preprint arXiv:2503.09089},
  year={2025}
}
```