---
dataset_info:
  features:
  - name: repo_name
    dtype: string
  - name: repo_commit
    dtype: string
  - name: repo_content
    dtype: string
  - name: repo_readme
    dtype: string
  splits:
  - name: train
    num_bytes: 29227644
    num_examples: 158
  - name: test
    num_bytes: 8765331
    num_examples: 40
  download_size: 12307532
  dataset_size: 37992975
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- summarization
tags:
- code
size_categories:
- n<1K
---
# Generate README Eval
The generate-readme-eval is a dataset (train split) and benchmark (test split) for evaluating how effectively LLMs
summarize an entire GitHub repository in the form of a README.md file. The dataset is curated from the top 400 real Python repositories
on GitHub with at least 1000 stars and 100 forks. The script used to generate the dataset can be found [here](_script_for_gen.py).
We restrict the dataset to GitHub repositories that are smaller than 100k tokens, so that an entire repo
fits into the context window of an LLM in a single call. The `train` split can be used to fine-tune your own model; the results
reported here are for the `test` split.
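
As a quick illustration, the splits can be loaded with the `datasets` library; the dataset id below is a placeholder for wherever this dataset is hosted:

```
from datasets import load_dataset

# Placeholder dataset id -- substitute the actual Hub path of this dataset.
ds = load_dataset("<org>/generate-readme-eval")
train, test = ds["train"], ds["test"]  # 158 and 40 repositories respectively
print(train[0]["repo_name"], len(train[0]["repo_content"]))
```
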
To evaluate an LLM on the benchmark, use the evaluation script given [here](_script_for_eval.py). During evaluation we prompt
the LLM to generate a structured README.md file from the entire contents of the repository (`repo_content`). We then evaluate
the LLM's output by comparing it with the repository's actual README file across several different metrics.
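
For illustration only, here is a minimal sketch of what such a generation call could look like with the OpenAI Python client; the prompt template and helper function are assumptions, and the actual prompting logic lives in [_script_for_eval.py](_script_for_eval.py):

```
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_readme(repo_content: str, model: str = "gpt-4o-mini-2024-07-18") -> str:
    """Ask the model to write a structured README.md for the full repository dump."""
    prompt = (
        "Below are the full contents of a GitHub repository.\n"
        "Write a well-structured README.md for it.\n\n"
        f"{repo_content}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```
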
In addition to traditional NLP metrics like BLEU, ROUGE scores, and cosine similarity, we also compute custom metrics
that capture structural similarity, code consistency, readability, and information retrieval (from code to README). The final score
is a weighted average of these metrics. The weights used for the final score are shown below.
```
weights = {
    'bleu': 0.1,
    'rouge-1': 0.033,
    'rouge-2': 0.033,
    'rouge-l': 0.034,
    'cosine_similarity': 0.1,
    'structural_similarity': 0.1,
    'information_retrieval': 0.2,
    'code_consistency': 0.2,
    'readability': 0.2
}
```
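
Because the weights sum to 1.0 and every metric is reported on a 0-100 scale, the final score is a straight weighted sum. A minimal sketch, using the gpt-4o-mini per-metric results from the leaderboard below, which reproduces its final score of 32.16:

```
# Combine per-metric scores (0-100 scale) into the final weighted score.
# `weights` is the dict defined above.
def final_score(metrics: dict, weights: dict) -> float:
    return sum(weights[name] * metrics[name] for name in weights)

gpt_4o_mini = {
    'bleu': 1.64, 'rouge-1': 15.46, 'rouge-2': 3.85, 'rouge-l': 14.84,
    'cosine_similarity': 40.57, 'structural_similarity': 23.81,
    'information_retrieval': 72.50, 'code_consistency': 4.77,
    'readability': 44.81,
}
print(round(final_score(gpt_4o_mini, weights), 2))  # 32.16
```
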
At the end of the evaluation the script prints the metrics and stores the entire run in a log file. If you want to add your model to the
leaderboard, please create a PR with the log file of the run and details about the model.

If we use the repositories' existing README.md files as the golden output, we get a score of 56.6 on this benchmark.
This can be validated by running the evaluation script with the `--oracle` flag.
The oracle run log is available [here](oracle_results_20240912_155859.log).

# Leaderboard
| Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-l | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
|:-----:|:-----:|:----:|:-------:|:-------:|:-------:|:----------:|:--------------:|:--------:|:----------------:|:-----------:|:----:|
| gpt-4o-mini-2024-07-18 | 32.16 | 1.64 | 15.46 | 3.85 | 14.84 | 40.57 | 23.81 | 72.50 | 4.77 | 44.81 | [link](gpt-4o-mini-2024-07-18_results_20240912_161045.log) |
| gpt-4o-2024-08-06 | 33.13 | 1.68 | 15.36 | 3.59 | 14.81 | 40.00 | 23.91 | 74.50 | **8.36** | 44.33 | [link](gpt-4o-2024-08-06_results_20240912_155645.log) |
| gemini-1.5-flash-8b-exp-0827 | 32.12 | 1.36 | 14.66 | 3.31 | 14.14 | 38.31 | 23.00 | 70.00 | 7.43 | **46.47** | [link](gemini-1.5-flash-8b-exp-0827_results_20240912_134026.log) |
| **gemini-1.5-flash-exp-0827** | **33.43** | 1.66 | **16.00** | 3.88 | **15.33** | **41.87** | 23.59 | **76.50** | 7.86 | 43.34 | [link](gemini-1.5-flash-exp-0827_results_20240912_144919.log) |
| gemini-1.5-pro-exp-0827 | 32.51 | **2.55** | 15.27 | **4.97** | 14.86 | 41.09 | **23.94** | 72.82 | 6.73 | 43.34 | [link](gemini-1.5-pro-exp-0827_results_20240912_141225.log) |
| oracle-score | 56.79 | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 | 98.24 | 59.00 | 11.01 | 14.84 | [link](oracle_results_20240912_155859.log) |