Triangle104 committed on
Commit b2d5189 · verified · 1 Parent(s): 017059c

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -22,6 +22,10 @@ base_model: nvidia/AceReason-Nemotron-7B
22   This model was converted to GGUF format from [`nvidia/AceReason-Nemotron-7B`](https://huggingface.co/nvidia/AceReason-Nemotron-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
23   Refer to the [original model card](https://huggingface.co/nvidia/AceReason-Nemotron-7B) for more details on the model.
24
25 + ---
26 + We're thrilled to introduce AceReason-Nemotron-7B, a math and code reasoning model trained entirely through reinforcement learning (RL), starting from DeepSeek-R1-Distilled-Qwen-7B. It delivers impressive results, achieving 69.0% on AIME 2024 (+14.5%), 53.6% on AIME 2025 (+17.4%), 51.8% on LiveCodeBench v5 (+8%), and 44.1% on LiveCodeBench v6 (+7%). We systematically study the RL training process through extensive ablations and propose a simple yet effective approach: first RL training on math-only prompts, then RL training on code-only prompts. Notably, we find that math-only RL significantly enhances the performance of strong distilled models not only on math benchmarks but also on code reasoning tasks. In addition, extended code-only RL further improves code benchmark performance while causing minimal degradation in math results. We find that RL not only elicits the foundational reasoning capabilities acquired during pre-training and supervised fine-tuning (e.g., distillation), but also pushes the limits of the model's reasoning ability, enabling it to solve problems that were previously unsolvable.
27 +
28 + ---
29   ## Use with llama.cpp
30   Install llama.cpp through brew (works on Mac and Linux)
31
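For reference, a minimal usage sketch following the brew-install path mentioned above. The `--hf-repo` and `--hf-file` values below are placeholders assumed for illustration, not values taken from this commit; substitute the actual GGUF repo name and quant filename.

```bash
# Install llama.cpp via Homebrew (works on macOS and Linux)
brew install llama.cpp

# Run the converted GGUF model with the llama.cpp CLI.
# NOTE: the repo and filename below are hypothetical placeholders.
llama-cli --hf-repo Triangle104/AceReason-Nemotron-7B-GGUF \
  --hf-file acereason-nemotron-7b-q4_k_m.gguf \
  -p "Prove that the sum of two even integers is even."
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if an OpenAI-compatible HTTP endpoint is preferred over the interactive CLI.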