Update README.md
README.md (changed)
@@ -70,13 +70,24 @@ leaderboard please create a PR with the log file of the run and details about th
 
 If we use the existing README.md files in the repositories as the golden output, we would get a score of 56.6 on this benchmark.
 We can validate it by running the evaluation script with `--oracle` flag.
-The oracle run log is available [here]().
+The oracle run log is available [here](oracle_results_20240912_155859.log).
 
 # Leaderboard
 
+bleu: 0.0164
+rouge-1: 0.1546
+rouge-2: 0.0385
+rouge-l: 0.1484
+cosine_similarity: 0.4057
+structural_similarity: 0.2381
+information_retrieval: 0.7250
+code_consistency: 0.0477
+readability: 0.4481
+weighted_score: 0.3216
+
 | Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-l | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
 |:-----:|:-----:|:----:|:-------:|:-------:|:-------:|:----------:|:--------------:|:--------:|:----------------:|:-----------:|:----:|
-| gpt-4o-mini-2024-07-18 | 32.
+| gpt-4o-mini-2024-07-18 | 32.16 | 1.64 | 15.46 | 3.85 | 14.84 | 40.57 | 23.81 | 72.50 | 4.77 | 44.81 | [link](gpt-4o-mini-2024-07-18_results_20240912_161045.log) |
 | gpt-4o-2024-08-06 | 33.13 | 1.68 | 15.36 | 3.59 | 14.81 | 40.00 | 23.91 | 74.50 | **8.36** | 44.33 | [link](gpt-4o-2024-08-06_results_20240912_155645.log) |
 | gemini-1.5-flash-8b-exp-0827 | 32.12 | 1.36 | 14.66 | 3.31 | 14.14 | 38.31 | 23.00 | 70.00 | 7.43 | **46.47** | [link](gemini-1.5-flash-8b-exp-0827_results_20240912_134026.log) |
 | **gemini-1.5-flash-exp-0827** | **33.43** | 1.66 | **16.00** | 3.88 | **15.33** | **41.87** | 23.59 | **76.50** | 7.86 | 43.34 | [link](gemini-1.5-flash-exp-0827_results_20240912_144919.log) |
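
For reference, the raw metric values added above the table are the evaluation script's per-metric outputs on a 0-1 scale, and the leaderboard reports them multiplied by 100: in this diff, bleu 0.0164 matches 1.64 in the gpt-4o-mini-2024-07-18 row and weighted_score 0.3216 matches its Score of 32.16. The sketch below illustrates that scaling and a weighted combination; the metric weights are not part of this diff, so the equal weights and the helper names here are placeholders, not the repository's actual implementation.

```python
# Illustrative sketch (not the benchmark's code): turn the raw 0-1 metric
# values from an evaluation run into the 0-100 values shown in the table.

RAW_METRICS = {
    "bleu": 0.0164,
    "rouge-1": 0.1546,
    "rouge-2": 0.0385,
    "rouge-l": 0.1484,
    "cosine_similarity": 0.4057,
    "structural_similarity": 0.2381,
    "information_retrieval": 0.7250,
    "code_consistency": 0.0477,
    "readability": 0.4481,
}

# Placeholder equal weighting -- the benchmark's real weights live in its
# evaluation script and will not reproduce the 32.16 weighted score above.
WEIGHTS = {name: 1 / len(RAW_METRICS) for name in RAW_METRICS}


def weighted_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-metric scores (0-1) into a single weighted score (0-1)."""
    return sum(metrics[name] * weights[name] for name in weights)


def as_table_row(metrics: dict[str, float]) -> dict[str, float]:
    """Scale 0-1 metric values to the 0-100 numbers shown in the leaderboard."""
    return {name: round(value * 100, 2) for name, value in metrics.items()}


if __name__ == "__main__":
    print(as_table_row(RAW_METRICS))  # e.g. bleu -> 1.64, readability -> 44.81
    print(round(weighted_score(RAW_METRICS, WEIGHTS) * 100, 2))  # placeholder weights
```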