smajumdar94 committed
Commit d96318c · verified · 1 Parent(s): 246a4a6

Update README.md

Files changed (1): README.md (+232 -3)

---
license: cc-by-4.0
language:
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- nvidia
- code
---
# OpenReasoning-Nemotron-7B Overview

## Description: <br>
OpenReasoning-Nemotron-7B is a large language model (LLM) derived from Qwen2.5-7B-Instruct (the reference model). It is a reasoning model post-trained for solution generation in math, code, and science. The model supports a context length of 64K tokens. The OpenReasoning models are available in the following sizes: 1.5B, 7B, 14B, and 32B. <br>

This model is ready for commercial/non-commercial research use. <br>

### License/Terms of Use: <br>
GOVERNING TERMS: Use of the models listed above is governed by the [Creative Commons Attribution 4.0 International License (CC-BY-4.0)](https://creativecommons.org/licenses/by/4.0/legalcode.en). ADDITIONAL INFORMATION: [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE)

## Scores on Reasoning Benchmarks

| **Model** | **ArtificialAnalysisIndex** | **GPQA** | **MMLU-PRO** | **HLE** | **LiveCodeBench** | **SciCode** | **AIME24** | **AIME25** | **HMMT Feb 25** | **BRUMO25** |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| **1.5B** | - | 31.6 | 47.5 | 5.5 | 28.6 | 2.2 | 55.5 | 45.6 | 31.5 | 50.6 |
| **7B** | 54.7 | 61.1 | 71.9 | 8.3 | 63.3 | 16.2 | 84.7 | 78.2 | 63.5 | 80.3 |
| **14B** | 60.9 | 71.6 | 77.5 | 10.1 | 67.8 | 23.5 | 87.8 | 82.0 | 71.2 | 87.7 |
| **32B** | 64.3 | 73.1 | 80.0 | 11.9 | 70.2 | 28.5 | 89.2 | 84.0 | 73.8 | 88.0 |

## Scores for Math Reasoning Benchmarks with GenSelect

| **Model / Benchmark** | **Pass@1 (Avg@64)** | **Majority@64** | **GenSelect@64** |
| :--- | :--- | :--- | :--- |
| **1.5B** | | | |
| AIME24 | 55.5 | 76.7 | 76.7 |
| AIME25 | 45.6 | 70.0 | 70.0 |
| HMMT Feb 25 | 31.5 | 46.7 | 53.3 |
| BRUMO25 | 50.6 | 70.0 | 73.3 |
| **7B** | | | |
| AIME24 | 84.7 | 93.3 | 93.3 |
| AIME25 | 78.2 | 86.7 | 93.3 |
| HMMT Feb 25 | 63.5 | 83.3 | 90.0 |
| BRUMO25 | 80.3 | 93.3 | 96.7 |
| **14B** | | | |
| AIME24 | 87.8 | 93.3 | 93.3 |
| AIME25 | 82.0 | 90.0 | 90.0 |
| HMMT Feb 25 | 71.2 | 86.7 | 93.3 |
| BRUMO25 | 87.7 | 96.7 | 96.7 |
| **32B** | | | |
| AIME24 | 89.2 | 93.3 | 93.3 |
| AIME25 | 84.0 | 90.0 | 93.3 |
| HMMT Feb 25 | 73.8 | 86.7 | 96.7 |
| BRUMO25 | 88.0 | 96.7 | 100.0 |

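Roughly, Pass@1 (Avg@64) is single-sample accuracy averaged over 64 generations per problem, Majority@64 takes the most frequent final answer among those 64 samples, and GenSelect@64 has the model generatively select the best candidate (the GenSelect procedure from the AIMO-2 work cited below). As a minimal, model-free sketch of the voting-style aggregation (function names and toy data are illustrative, not part of this repository):

```python
from collections import Counter

def pass_at_1(candidate_answers, reference):
    """Single-sample accuracy averaged over all sampled candidates (Pass@1, Avg@k)."""
    return sum(ans == reference for ans in candidate_answers) / len(candidate_answers)

def majority_at_k(candidate_answers):
    """Most frequent final answer among the k sampled candidates (Majority@k)."""
    return Counter(candidate_answers).most_common(1)[0][0]

# Toy example: 64 extracted final answers for one problem whose reference answer is "42".
samples = ["42"] * 40 + ["41"] * 14 + ["43"] * 10
print(pass_at_1(samples, "42"))  # 0.625
print(majority_at_k(samples))    # 42
```
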
## How to use the models?

To run inference on coding problems:

````python
import transformers
import torch

model_id = "nvidia/OpenReasoning-Nemotron-7B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Code generation prompt
prompt = """You are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below.
Please use python programming language only.
You must use ```python for just the final solution code block with the following format:
```python
# Your code here
```
{user}
"""

messages = [
    {
        "role": "user",
        "content": prompt.format(user="Write a program to calculate the sum of the first $N$ Fibonacci numbers"),
    },
]

outputs = pipeline(
    messages,
    max_new_tokens=64000,
)
print(outputs[0]["generated_text"][-1]["content"])
````

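Since the prompt instructs the model to place only the final solution in a single ```python block, a small helper can pull that block out of the generated text. This is a minimal sketch (the helper name and regex are illustrative, not part of this repository):

````python
import re

def extract_final_code(response):
    """Return the body of the last ```python ... ``` block in a model response, or None."""
    blocks = re.findall(r"```python\s*\n(.*?)```", response, flags=re.DOTALL)
    return blocks[-1].strip() if blocks else None

# Works on the `content` string printed in the example above; shown here on a stub response.
demo_response = "Step-by-step reasoning...\n```python\nprint(sum([0, 1, 1, 2, 3, 5, 8, 13, 21, 34]))\n```"
print(extract_final_code(demo_response))
````
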
## Citation

If you find the data useful, please cite:

```bibtex
@article{ahmad2025opencodereasoning,
  title={OpenCodeReasoning: Advancing Data Distillation for Competitive Coding},
  author={Wasi Uddin Ahmad and Sean Narenthiran and Somshubra Majumdar and Aleksander Ficek and Siddhartha Jain and Jocelyn Huang and Vahid Noroozi and Boris Ginsburg},
  year={2025},
  eprint={2504.01943},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2504.01943},
}
```

```bibtex
@misc{ahmad2025opencodereasoningiisimpletesttime,
  title={OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique},
  author={Wasi Uddin Ahmad and Somshubra Majumdar and Aleksander Ficek and Sean Narenthiran and Mehrzad Samadi and Jocelyn Huang and Siddhartha Jain and Vahid Noroozi and Boris Ginsburg},
  year={2025},
  eprint={2507.09075},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.09075},
}
```

```bibtex
@misc{moshkov2025aimo2winningsolutionbuilding,
  title={AIMO-2 Winning Solution: Building State-of-the-Art Mathematical Reasoning Models with OpenMathReasoning dataset},
  author={Ivan Moshkov and Darragh Hanley and Ivan Sorokin and Shubham Toshniwal and Christof Henkel and Benedikt Schifferer and Wei Du and Igor Gitman},
  year={2025},
  eprint={2504.16891},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2504.16891},
}
```

## Additional Information:

### Deployment Geography:
Global <br>

### Use Case: <br>
This model is intended for developers and researchers who work on competitive math, code, and science problems. It has been trained via supervised fine-tuning only to achieve strong scores on benchmarks. <br>

### Release Date: <br>
Hugging Face [07/16/2025] via https://huggingface.co/nvidia/OpenReasoning-Nemotron-7B/ <br>

## Reference(s):
[2504.01943] OpenCodeReasoning: Advancing Data Distillation for Competitive Coding <br>
[2507.09075] OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique <br>
[2504.16891] AIMO-2 Winning Solution: Building State-of-the-Art Mathematical Reasoning Models with OpenMathReasoning dataset <br>

## Model Architecture: <br>
Architecture Type: Dense decoder-only Transformer model <br>
Network Architecture: Qwen2.5-7B-Instruct <br>

**This model was developed based on Qwen2.5-7B-Instruct and has 7B model parameters. <br>**

**OpenReasoning-Nemotron-1.5B was developed based on Qwen2.5-1.5B-Instruct and has 1.5B model parameters. <br>**

**OpenReasoning-Nemotron-7B was developed based on Qwen2.5-7B-Instruct and has 7B model parameters. <br>**

**OpenReasoning-Nemotron-14B was developed based on Qwen2.5-14B-Instruct and has 14B model parameters. <br>**

**OpenReasoning-Nemotron-32B was developed based on Qwen2.5-32B-Instruct and has 32B model parameters. <br>**

## Input: <br>
**Input Type(s):** Text <br>
**Input Format(s):** String <br>
**Input Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Input:** Context length up to 64,000 tokens <br>

## Output: <br>
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** One-Dimensional (1D) <br>
**Other Properties Related to Output:** Context length up to 64,000 tokens <br>

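Both input and output are plain strings, so a quick way to sanity-check a request against the 64K-token context budget is to render the chat template and count tokens. A minimal sketch, assuming the Hugging Face tokenizer that ships with this checkpoint:

```python
from transformers import AutoTokenizer

model_id = "nvidia/OpenReasoning-Nemotron-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Prove that the sum of two even integers is even."}]

# Render the chat template into the flat input string the model actually consumes.
prompt_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Count prompt tokens against the 64,000-token context window.
num_tokens = len(tokenizer(prompt_text)["input_ids"])
print(f"{num_tokens} prompt tokens; roughly {64000 - num_tokens} tokens left for the response")
```
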
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Software Integration: <br>
* Runtime Engine: NeMo 2.3.0 <br>
* Recommended Hardware Microarchitecture Compatibility: <br>
  * NVIDIA Ampere <br>
  * NVIDIA Hopper <br>
* Preferred/Supported Operating System(s): Linux <br>

## Model Version(s):
1.0 (7/16/2025) <br>
OpenReasoning-Nemotron-32B <br>
OpenReasoning-Nemotron-14B <br>
OpenReasoning-Nemotron-7B <br>
OpenReasoning-Nemotron-1.5B <br>

# Training and Evaluation Datasets: <br>

## Training Dataset:

The training corpus for OpenReasoning-Nemotron-7B comprises questions from the [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) dataset, [OpenCodeReasoning-II](https://arxiv.org/abs/2507.09075), [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning), and the synthetic science questions from the [Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset). All responses are generated using DeepSeek-R1-0528. We also include the instruction-following and tool-calling data from the Llama-Nemotron-Post-Training-Dataset without modification.

Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
Labeling Method: Hybrid: Automated, Human, Synthetic <br>
Properties: 5M DeepSeek-R1-0528-generated responses to questions from [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning), [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning), and the synthetic science questions from the [Llama-Nemotron-Post-Training-Dataset](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset), plus the instruction-following and tool-calling data from the same dataset, included without modification. <br>

## Evaluation Dataset:
We used the following benchmarks to evaluate the model holistically.

### Math
- AIME 2024/2025 <br>
- HMMT Feb 2025 <br>
- BRUMO 2025 <br>

### Code
- LiveCodeBench <br>
- SciCode <br>

### Science
- GPQA <br>
- MMLU-PRO <br>
- HLE <br>

Data Collection Method: Hybrid: Automated, Human, Synthetic <br>
Labeling Method: Hybrid: Automated, Human, Synthetic <br>

## Inference:
**Acceleration Engine:** vLLM, TensorRT-LLM <br>
**Test Hardware:** NVIDIA H100-80GB <br>

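For vLLM, the offline generation API can be used along the lines of the sketch below (argument values are illustrative, not an official recommendation; for chat-style prompting, apply the chat template as shown earlier):

```python
from vllm import LLM, SamplingParams

# Load the checkpoint, capping the context window at the model's 64K limit.
llm = LLM(model="nvidia/OpenReasoning-Nemotron-7B", max_model_len=64000)

# Illustrative sampling settings for long-form reasoning; tune for your workload.
sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=32768)

prompt = "Solve step by step: what is the sum of the first 10 Fibonacci numbers?"
outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```
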
## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report model quality, risk, security vulnerabilities, or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).