Update README.md
README.md (changed)
@@ -25,8 +25,21 @@ OlympicCoder-32B is a code model that achieves very strong performance on competitive coding benchmarks …

## Evaluation

We compare the performance of OlympicCoder models on two main benchmarks for competitive coding:

* **[IOI'2024](https://github.com/huggingface/ioi):** 6 very challenging problems from the 2024 International Olympiad in Informatics. Models are allowed up to 50 submissions per problem.
* **[LiveCodeBench](https://livecodebench.github.io):** Python programming problems sourced from platforms like CodeForces and LeetCode. We use the `v4_v5` subset of [`livecodebench/code_generation_lite`](https://huggingface.co/datasets/livecodebench/code_generation_lite), which corresponds to 268 problems. We use `lighteval` to evaluate models on LiveCodeBench with the sampling parameters described [here](https://github.com/huggingface/open-r1?tab=readme-ov-file#livecodebench); a sample command is sketched below the note.

> [!NOTE]
> The OlympicCoder models were post-trained exclusively on C++ solutions generated by DeepSeek-R1. As a result, their performance on LiveCodeBench should be considered partially _out-of-domain_, since that benchmark expects models to output solutions in Python.
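
As a rough sketch, the LiveCodeBench evaluation can be reproduced with `lighteval` on a vLLM backend along the lines of the command below. The task name and the sampling parameter values shown here are assumptions based on the open-r1 instructions linked above; check that page for the exact settings before running.

```shell
# Illustrative sketch only: verify the task name and sampling parameters against
# the open-r1 LiveCodeBench instructions linked above.
MODEL=open-r1/OlympicCoder-32B
MODEL_ARGS="pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.7,top_p:0.95}"

lighteval vllm "$MODEL_ARGS" "extended|lcb:codegeneration|0|0" \
    --use-chat-template \
    --output-dir data/evals/$MODEL
```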

### IOI'24

### LiveCodeBench