---
language:
- en
tags:
- Compiler
- LLVM
- Intermediate Representation
- IR
- Path
- Hot Path
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: path
    dtype: string
  - name: count
    dtype: int64
  - name: source_file
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 3468576
    num_examples: 1190
  - name: validation
    num_bytes: 647074
    num_examples: 211
  - name: test
    num_bytes: 194998
    num_examples: 160
  download_size: 798471
  dataset_size: 4310648
---
# Dataset Card for Compiler Hot Paths

## Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

This dataset consists of 1561 compiler paths generated from 26 C programs in the [Polybench Benchmark Suite](https://github.com/MatthiasJReisinger/PolyBenchC-4.2.1) using the [Ball-Larus Algorithm](https://github.com/waker-he/ball-larus/tree/main).
Each path, a sequence of LLVM IR instructions, has three associated values:

1. `count`, an integer indicating the number of times the path was executed in the original program.
2. `source_file`, a string indicating which program the path came from.
3. `label`, an integer that is 1 if the path is "hot" and 0 if it is "cold".

Note: 4 programs (`deriche`, `cholesky`, `gramschmidt`, `correlation`) were excluded because we encountered errors when running them.

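As a concrete illustration of the schema, a single record has the shape below. The IR snippet and values are invented for illustration; they are not an actual row from the dataset.

```python
# One record of the dataset, with the four fields described above.
# The LLVM IR path and the numbers are made up for illustration.
example = {
    "path": "%i = phi i64 [ 0, %entry ], [ %inc, %loop ]\nbr label %loop",
    "count": 128,          # times this path executed in the original program
    "source_file": "3mm",  # which PolyBench program the path came from
    "label": 1,            # 1 = hot, 0 = cold
}
```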
## Uses

<!-- Address questions around how the dataset is intended to be used. -->

This dataset was used to train and fine-tune machine learning models for hot-path prediction: given a path, predict whether it is "hot" or "cold".
A path is considered "hot" if it is executed more than a threshold of *n* times, where we set *n = 1*; otherwise it is considered "cold".

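The labeling rule can be sketched as follows (`label_path` is a hypothetical helper for illustration, not part of the dataset's tooling):

```python
# Minimal sketch of the hot/cold labeling rule described above.
HOT_THRESHOLD = 1  # the card's stated threshold, n = 1

def label_path(count: int, n: int = HOT_THRESHOLD) -> int:
    """Return 1 ("hot") if the path executed more than n times, else 0 ("cold")."""
    return 1 if count > n else 0

# A path executed once is cold; a path executed twice is hot.
print(label_path(1), label_path(2))  # 0 1
```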
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The dataset is split into train (1190 paths, ~76%), validation (211, ~14%), and test (160, ~10%) sets. The test set consists of paths from 4 PolyBench programs: `jacobi-2d`, `syr2k`, `durbin`, and `2mm`.
These 4 programs were randomly selected as the test set before the paths were generated, which guarantees that the models never see the test set's programs during training.
The train and validation sets consist of paths from the remaining 22 programs, split randomly after path generation (while preserving the hot-to-cold path ratio),
so some paths in the training and validation sets may come from the same C program. This is unlikely to be an issue, however, since the paths themselves are distinct.
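The program-level hold-out described above can be sketched as follows. The helper name and record layout are assumptions for illustration; the point is that entire programs go to the test set, so no test-program path can leak into training.

```python
# Hold out whole programs for the test set, mirroring the card's split.
TEST_PROGRAMS = {"jacobi-2d", "syr2k", "durbin", "2mm"}

def split_by_program(examples):
    """Partition records into a train/validation pool and a held-out test set."""
    pool, test = [], []
    for ex in examples:
        # Tolerate either "2mm" or "2mm.c" as the source_file value.
        name = ex["source_file"].removesuffix(".c")
        (test if name in TEST_PROGRAMS else pool).append(ex)
    return pool, test

records = [
    {"source_file": "2mm.c", "count": 5, "label": 1},   # held-out program
    {"source_file": "atax.c", "count": 1, "label": 0},  # train/validation pool
]
pool, test = split_by_program(records)
print(len(pool), len(test))  # 1 1
```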