---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-14K
base_model:
- microsoft/Phi-4-reasoning-plus
pipeline_tag: text-generation
library_name: transformers
tags:
- lora
---

<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">

# R3-Phi-4-reasoning-plus-14k

R3-Phi-4-reasoning-plus-14k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models.
We perform SFT on the Qwen3 model family at the 4B, 8B, and 14B scales, as well as on Phi-4-reasoning-plus.
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!

## Model description

- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources that covers tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s), evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** microsoft/Phi-4-reasoning-plus

### Model Sources

- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388

## Using the Model

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_path = "rubricreward/R3-Phi-4-reasoning-plus-14k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, min_p=0, top_k=20, max_tokens=8192)

llm = LLM(
    model=model_path,
    dtype="bfloat16",
    max_model_len=10000,
    tensor_parallel_size=2,
    gpu_memory_utilization=0.9,
    enforce_eager=True,
)

messages: list[dict[str, str]] = [
    {'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'}
]

list_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # Switch between thinking and non-thinking modes.
)

outputs = llm.generate(list_text, sampling_params)
```

## License and use

R3 is licensed under the Apache 2.0 license.

## Citation

```bibtex
@article{anugraha2025r3,
  title={R3: Robust Rubric-Agnostic Reward Models},
  author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
  journal={arXiv preprint arXiv:2505.13388},
  year={2025}
}
```