---
license: llama3.1
datasets:
- KAKA22/CodeRM-UnitTest
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- code
- llama
---

# Model Description

CodeRM-8B is a small yet powerful model designed to enable efficient and high-quality unit test generation.
It is fine-tuned from Llama3.1-8B-Instruct on a dataset of 60k high-quality synthetic Python unit tests.
These unit tests are derived from two well-regarded code instruction tuning datasets:
[CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) and the
training set of [TACO](https://huggingface.co/datasets/BAAI/TACO).
The training dataset used for unit test generation is openly available as
[CodeRM-UnitTest](https://huggingface.co/datasets/KAKA22/CodeRM-UnitTest).

For further information and training details, refer to our paper
"Dynamic Scaling of Unit Tests for Code Reward Modeling", available on arXiv.
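
To browse the training data directly, something like the following should work (a minimal sketch, not part of the original card; the `train` split name is an assumption, so check the dataset card):

```python
# Minimal sketch: inspect the CodeRM-UnitTest training data.
# The split name "train" is an assumption; verify it on the dataset card.
from datasets import load_dataset

ds = load_dataset("KAKA22/CodeRM-UnitTest", split="train")
print(ds[0])  # one synthetic Python unit-test record
```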

# Prompt Format

```
Below is a question and its corresponding code answer. Please write test cases to check the correctness of the code answer. You need to use the unittest library in Python and create a test class for testing.

### question
{question}

### code solution
{code in function format}

Please add detailed comments to the test cases you write. You do not need to test the function's ability to throw exceptions.
```
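
As a usage illustration (a minimal sketch, not from the original card), the template above can be filled in and passed to the model through the standard `transformers` chat interface. The repository id and generation settings below are assumptions; substitute the actual checkpoint name:

```python
# Minimal usage sketch. The repo id "KAKA22/CodeRM-8B" is an assumption;
# replace it with the actual checkpoint. The second template placeholder is
# renamed to {solution} here so that str.format works (the original
# "{code in function format}" contains spaces).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KAKA22/CodeRM-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

PROMPT = """Below is a question and its corresponding code answer. Please write test cases to check the correctness of the code answer. You need to use the unittest library in Python and create a test class for testing.

### question
{question}

### code solution
{solution}

Please add detailed comments to the test cases you write. You do not need to test the function's ability to throw exceptions."""

question = "Write a function that returns the sum of two integers."
solution = "def add(a, b):\n    return a + b"

# CodeRM-8B is based on Llama3.1-8B-Instruct, so the prompt is wrapped
# in the chat template rather than fed to the model as raw text.
messages = [{"role": "user", "content": PROMPT.format(question=question, solution=solution)}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The model should respond with a commented `unittest` test class for the given solution, per the instructions in the prompt.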