codelion committed
Commit 53ee409
1 Parent(s): 0978e2b

Update README.md (#3)


- Update README.md (cbf7e7cc4f130f3b91d3c6c7dd12fb033f5113fc)

Files changed (1)
  1. README.md +2 -13
README.md CHANGED

@@ -109,22 +109,11 @@ models except Google Gemini which has a context legnth of up to 2 Million tokens
 2) There is a trade-off in accuracy inherit in the benchmark as adding more examples makes some of the metrics like `information_retrieval`
 and `readability` worse. At larger contexts models do not have perfect recall and may miss important information.
 
-Our experiments with few-shot prompts confirm this, there is 1
-
-bleu: 0.1924
-rouge-1: 0.3231
-rouge-2: 0.2148
-rouge-l: 0.3174
-cosine_similarity: 0.6149
-structural_similarity: 0.3317
-information_retrieval: 0.5950
-code_consistency: 0.1148
-readability: 0.2765
-weighted_score: 0.3397
+Our experiments with few-shot prompts confirm this, the maximum overall score is at 1-shot and adding more examples doesn't help after that.
 
 | Model | Score | BLEU | ROUGE-1 | ROUGE-2 | ROUGE-l | Cosine-Sim | Structural-Sim | Info-Ret | Code-Consistency | Readability | Logs |
 |:-----:|:-----:|:----:|:-------:|:-------:|:-------:|:----------:|:--------------:|:--------:|:----------------:|:-----------:|:----:|
 | 0-shot-gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | [link](gemini-1.5-flash-exp-0827_results_20240912_144919.log) |
 | 1-shot-gemini-1.5-flash-exp-0827 | 35.40 | 21.81 | 34.00 | 24.97 | 33.61 | 61.53 | 37.60 | 61.00 | 12.89 | 27.22 | [link](1-shot-gemini-1.5-flash-exp-0827_results_20240912_183343.log) |
-| 3-shot-gemini-1.5-flash-exp-0827 | 33.43 | 1.66 | 16.00 | 3.88 | 15.33 | 41.87 | 23.59 | 76.50 | 7.86 | 43.34 | [link](gemini-1.5-flash-exp-0827_results_20240912_144919.log) |
+| 3-shot-gemini-1.5-flash-exp-0827 | 33.10 | 20.02 | 32.70 | 22.66 | 32.21 | 58.98 | 34.54 | 60.50 | 13.09 | 20.52 | [link](3-shot-gemini-1.5-flash-exp-0827_results_20240912_191049.log) |
 | 5-shot-gemini-1.5-flash-exp-0827 | 33.97 | 19.24 | 32.31 | 21.48 | 31.74 | 61.49 | 33.17 | 59.50 | 11.48 | 27.65 | [link](5-shot-gemini-1.5-flash-exp-0827_results_20240912_180343.log) |
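The updated claim, that the overall score peaks at 1-shot and more examples don't help, can be sanity-checked directly against the Score column of the corrected table. A minimal sketch (the score values below are copied from the table in this diff; nothing else is assumed):

```python
# Overall weighted scores per shot count, taken from the
# corrected results table in this commit.
scores = {0: 33.43, 1: 35.40, 3: 33.10, 5: 33.97}

# The shot count with the highest overall score.
best_shots = max(scores, key=scores.get)
print(best_shots)  # 1: 1-shot beats 0-, 3-, and 5-shot

# Every setting beyond 1-shot scores lower than 1-shot.
assert all(scores[k] < scores[1] for k in scores if k != 1)
```

Note that 3-shot and 5-shot land below even each other's neighbors non-monotonically (33.10 vs 33.97), which matches the trade-off described above: extra examples improve BLEU/ROUGE over 0-shot but degrade `information_retrieval` and `readability`.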