---
license: other
---

# Summary of the Dataset

## Description

Stack-Repo is a dataset of 200 Java repositories from GitHub with permissive licenses and near-deduplicated files that are augmented with three types of repository contexts.

- Prompt Proposal (PP) Contexts: These contexts are based on the prompt proposals from the paper [Repository-Level Prompt Generation for Large Language Models of Code](https://arxiv.org/abs/2206.12839).
- BM25 Contexts: These contexts are obtained based on the BM25 similarity scores.
- RandomNN Contexts: These contexts are obtained using the nearest neighbors in the representation space of an embedding model.

For more details, please check our paper [RepoFusion: Training Code Models to Understand Your Repository](https://arxiv.org/abs/2306.10998).

The original Java source files are obtained using a [modified version](https://huggingface.co/datasets/bigcode/the-stack-dedup) of [The Stack](https://huggingface.co/datasets/bigcode/the-stack).

## Data Splits

The dataset consists of three splits: `train`, `validation`, and `test`, comprising 100, 50, and 50 repositories, respectively.

## Data Organization

Each split contains a separate folder per repository. Each repository folder contains all the `.java` source code files of the repository in their original directory structure, along with three `.json` files corresponding to the PP, BM25, and RandomNN repo contexts.

In terms of the HuggingFace Datasets terminology, we have four subdatasets or configurations:

- `PP_contexts`: Prompt Proposal repo contexts.
- `bm25_contexts`: BM25 repo contexts.
- `randomNN_contexts`: RandomNN repo contexts.
- `sources`: actual Java (`.java`) source code files.

# Dataset Usage

To clone the dataset locally:

```
git clone https://huggingface.co/datasets/RepoFusion/Stack-Repo
```

To load the desired configuration and split of the dataset:

```python
import datasets

ds = datasets.load_dataset(
    "RepoFusion/Stack-Repo",
    name="<config>",      # one of the four configurations listed above
    split="<split>",      # "train", "validation", or "test"
    data_dir="<data_dir>"
)
```

NOTE: The repo-context configurations `bm25_contexts`, `PP_contexts`, and `randomNN_contexts` can be loaded directly by specifying the corresponding `name` along with the `split` in the `load_dataset` command listed above, without cloning the repo locally. For `sources`, a `ManualDownloadError` will be raised if the repo has not been cloned beforehand or `data_dir` is not specified.

## Data Format

The expected data format of the `.json` files is a list of target holes and corresponding repo contexts, where each entry corresponds to one target hole and consists of the location of the target hole, the target hole as a string, the surrounding context as a string, and a list of repo contexts as strings. Specifically, each row is a dictionary containing:

- `id`: hole_id (location of the target hole)
- `question`: surrounding context
- `target`: target hole
- `ctxs`: a list of repo contexts, where each item is a dictionary containing
  - `title`: name of the repo context
  - `text`: content of the repo context

The actual Java sources can be accessed directly via the file system. The format is like this: `[/data////.java]`.

When accessed through `datasets.load_dataset`, the data fields for `sources` can be specified as below.

```python
features = datasets.Features({
    'file': datasets.Value('string'),
    'content': datasets.Value('string')
})
```

When accessed through `datasets.load_dataset`, the data fields for the repo contexts can be specified as below.
```python
features = datasets.Features({
    'id': datasets.Value('string'),
    'hole_file': datasets.Value('string'),
    'hole_line': datasets.Value('int32'),
    'hole_pos': datasets.Value('int32'),
    'question': datasets.Value('string'),
    'target': datasets.Value('string'),
    'answers': datasets.Sequence(datasets.Value('string')),
    'ctxs': [{
        'title': datasets.Value('string'),
        'text': datasets.Value('string'),
        'score': datasets.Value('float64')
    }]
})
```

# Additional Information

## Dataset Curators

- Disha Shrivastava, dishu.905@gmail.com
- Denis Kocetkov, denis.kocetkov@servicenow.com

## Licensing Information

Stack-Repo is derived from a modified version of The Stack. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack-dedup/blob/main/licenses.json).

## Citation

```
@article{shrivastava2023repofusion,
  title={RepoFusion: Training Code Models to Understand Your Repository},
  author={Shrivastava, Disha and Kocetkov, Denis and de Vries, Harm and Bahdanau, Dzmitry and Scholak, Torsten},
  journal={arXiv preprint arXiv:2306.10998},
  year={2023}
}
```
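## Appendix: Working with a Hole Record

The per-hole record layout described in the Data Format section can be exercised with a minimal, self-contained sketch. The record below is synthetic (its values are illustrative, not taken from the actual dataset), and `format_prompt` is a hypothetical helper showing one way a hole and its repo contexts could be combined, not the method used in the paper.

```python
import json

# A synthetic record following the documented per-hole schema:
# id, question (surrounding context), target (target hole), and
# ctxs (list of repo contexts with title/text). Values are made up.
record_json = """
{
  "id": "src/Main.java_42",
  "question": "public static int add(int a, int b) {",
  "target": "return a + b;",
  "ctxs": [
    {"title": "Utils.java", "text": "public static int sub(int a, int b) { return a - b; }"},
    {"title": "Main.java", "text": "public class Main { }"}
  ]
}
"""

record = json.loads(record_json)

def format_prompt(record, max_ctxs=1):
    """Prepend the first few repo contexts to the surrounding context,
    illustrating how the fields of one record fit together."""
    ctx_text = "\\n".join(c["text"] for c in record["ctxs"][:max_ctxs])
    return ctx_text + "\\n" + record["question"]

prompt = format_prompt(record)
print(prompt)
print("target:", record["target"])
```

On the real dataset, `record` would instead be one row of a loaded repo-context configuration (e.g. `bm25_contexts`), and the model would be asked to predict `target` given the prompt.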