Update README.md
README.md CHANGED

@@ -50,7 +50,7 @@ Please see our [paper](TODO) for more details on the evaluation setup.
 | DeepSeek-R1 | 79.1 (86.7) | 64.3 (73.3) | 53.0 (59.2) | 10.5 (11.4) |
 
 We used [a version of OpenMath-Nemotron-14B](https://huggingface.co/nvidia/OpenMath-Nemotron-14B-Kaggle) model to secure
-the first place in AIMO-2 Kaggle competition!
+the first place in [AIMO-2 Kaggle competition](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/leaderboard)!
 
 ## Reproducing our results
 
@@ -73,7 +73,7 @@ To run inference with CoT mode, you can use this example code snippet.
 import transformers
 import torch
 
-model_id = "nvidia/OpenMath-Nemotron-
+model_id = "nvidia/OpenMath-Nemotron-14B"
 
 pipeline = transformers.pipeline(
     "text-generation",