Add rStar-Coder dataset card
Browse filesThis PR adds a dataset card for the rStar-Coder dataset, including metadata and a description based on the provided paper abstract and Github README. The current README focuses on a different dataset (USACO 2025). This new card provides the correct information for rStar-Coder.
README.md
CHANGED
@@ -1,40 +1,31 @@
|
|
1 |
-
|
2 |
-
|
3 |
-
|
4 |
-
|
5 |
-
|
6 |
-
|
7 |
-
|
8 |
-
|
9 |
-
-
|
10 |
-
|
11 |
-
- `test_data_link`: Link to test data
|
12 |
-
- `solution_link`: Link to official solution
|
13 |
-
- `problem_level`: Difficulty level (bronze/silver/gold/platinum)
|
14 |
-
- `description`: Full problem statement
|
15 |
-
- `input_format`: Input specification
|
16 |
-
- `output_format`: Output specification
|
17 |
-
- `samples`: Sample test cases included in problem description
|
18 |
-
- `runtime_limit`: Time limit in seconds
|
19 |
-
- `memory_limit`: Memory limit in MB
|
20 |
-
- `solution`: Official solution with explanation
|
21 |
-
|
22 |
-
### Source Data
|
23 |
-
|
24 |
-
Data was collected from the official USACO contest platform `https://usaco.org//index.php?page=open25results`, copyright not specified.
|
25 |
-
|
26 |
-
### Citation
|
27 |
-
|
28 |
-
If you use this dataset, please cite:
|
29 |
-
|
30 |
-
```bibtex
|
31 |
-
@misc{liu2025rstarcoderscalingcompetitivecode,
|
32 |
-
title={rStar-Coder: Scaling Competitive Code Reasoning with a Large-Scale Verified Dataset},
|
33 |
-
author={Yifei Liu and Li Lyna Zhang and Yi Zhu and Bingcheng Dong and Xudong Zhou and Ning Shang and Fan Yang and Mao Yang},
|
34 |
-
year={2025},
|
35 |
-
eprint={2505.21297},
|
36 |
-
archivePrefix={arXiv},
|
37 |
-
primaryClass={cs.CL},
|
38 |
-
url={https://arxiv.org/abs/2505.21297},
|
39 |
-
}
|
40 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
---
|
2 |
+
task_categories:
|
3 |
+
- text-generation
|
4 |
+
license: apache-2.0 # Assuming Apache 2.0 license based on Microsoft projects. Confirm if different.
|
5 |
+
tags:
|
6 |
+
- code
|
7 |
+
- reasoning
|
8 |
+
- large-language-model
|
9 |
+
- competitive-programming
|
10 |
+
---
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
11 |
|
12 |
+
# rStar-Coder: A Large-Scale Verified Dataset for Code Reasoning
|
13 |
+
|
14 |
+
This dataset, introduced in the paper [rStar-Coder: Scaling Competitive Code Reasoning with a Large-Scale Verified Dataset](https://huggingface.co/papers/2505.21297), significantly advances code reasoning in large language models (LLMs). rStar-Coder contains 418K competition-level code problems, 580K long-reasoning solutions, and rich test cases of varying difficulty. This dataset was constructed through a three-step process: curating competitive programming problems, creating a reliable input-output test case synthesis pipeline, and augmenting problems with high-quality, test-case-verified long-reasoning solutions.
|
15 |
+
|
16 |
+
The rStar-Coder dataset enables significant improvements in LLM code reasoning capabilities. Experiments on Qwen models demonstrate superior performance compared to models trained on other datasets, achieving leading results even with smaller model sizes.
|
17 |
+
|
18 |
+
**Key Features:**
|
19 |
+
|
20 |
+
* 418K competition-level code problems
|
21 |
+
* 580K long-reasoning solutions
|
22 |
+
* Rich test cases of varying difficulty
|
23 |
+
* Verified input-output test cases
|
24 |
+
|
25 |
+
**Dataset Usage:**
|
26 |
+
|
27 |
+
The dataset can be used to train and evaluate LLMs for code reasoning tasks. The detailed structure and usage instructions are available in the GitHub repository.
|
28 |
+
|
29 |
+
**Code and Dataset:**
|
30 |
+
|
31 |
+
[https://github.com/microsoft/rStar](https://github.com/microsoft/rStar)
|