|
--- |
|
license: apache-2.0 |
|
language: |
|
- en |
|
datasets: |
|
- rubricreward/R3-Dataset-4K |
|
base_model: |
|
- Qwen/Qwen3-8B |
|
pipeline_tag: text-generation |
|
library_name: transformers |
|
tags: |
|
- lora |
|
--- |
|
|
|
<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px"> |
|
|
|
# R3-Qwen3-8B-LoRA-4k |
|
|
|
R3-Qwen3-8B-LoRA-4k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models. |
|
We perform SFT on the Qwen3 model family at the 4B, 8B, and 14B scales, as well as on Phi-4-reasoning-plus.
|
Check out [our paper](https://arxiv.org/abs/2505.13388) for more information! |
|
|
|
|
|
## Model description |
|
|
|
- **Model type:** A reward model trained on a curated R3 dataset collected from 45 diverse sources, covering tasks such as classification, preference optimization, and question answering. Each example contains an instruction and task description, an input, one or more responses, an evaluation rubric, and a score along with the corresponding reasoning (an illustrative sketch follows this list).
|
- **Language(s) (NLP):** English |
|
- **License:** Apache 2.0 |
|
- **Finetuned from model:** Qwen/Qwen3-8B |
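
To make the data layout concrete, here is a minimal sketch of what a single training example could look like. The field names below are illustrative assumptions for this card, not the dataset's actual schema; see [rubricreward/R3-Dataset-4K](https://huggingface.co/datasets/rubricreward/R3-Dataset-4K) for the real column names and format.

```python
# Illustrative sketch only: field names are assumptions, not the dataset's actual schema.
example = {
    "instruction": "Evaluate the response based on the given task, input, response, and evaluation rubric.",
    "input": "Summarize the following article in one sentence: ...",
    "responses": ["The article argues that ..."],
    "rubric": "1: off-topic or incorrect ... 5: accurate, complete, and well-written",
    "score": 4,
    "reasoning": "The summary captures the main claim but omits one supporting detail.",
}
```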
|
|
|
### Model Sources |
|
|
|
- **Project Page:** https://rubricreward.github.io |
|
- **Repository:** https://github.com/rubricreward/r3 |
|
- **Paper:** https://arxiv.org/abs/2505.13388 |
|
|
|
## Using the Model |
|
|
|
|
|
```python |
|
from transformers import AutoTokenizer |
|
from vllm import LLM, SamplingParams |
|
|
|
model_path = "rubricreward/R3-Qwen3-8B-LoRA-4k" |
|
tokenizer = AutoTokenizer.from_pretrained(model_path) |
|
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192, min_p=0, top_k=20) |
|
|
|
llm = LLM( |
|
model=model_path, |
|
dtype="bfloat16", |
|
max_model_len=10000, |
|
tensor_parallel_size=2, |
|
gpu_memory_utilization=0.9, |
|
enforce_eager=True, |
|
) |
|
|
|
messages: list[dict[str, str]] = [ |
|
{'content': "Evaluate the response based on the given task, input, response, and evaluation rubric. Provide a fair and detailed assessment following the rubric...", 'role': 'user'} |
|
] |
|
|
|
list_text = tokenizer.apply_chat_template( |
|
messages, |
|
tokenize=False, |
|
add_generation_prompt=True, |
|
enable_thinking=True # Switch between thinking and non-thinking modes. |
|
) |
|
|
|
outputs = llm.generate(list_text, sampling_params) |
|
``` |
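
Each element of `outputs` is a vLLM `RequestOutput`; the generated evaluation (including the reasoning trace when `enable_thinking=True`) can be read from its first completion's `text` field. A minimal sketch:

```python
# Inspect the generated evaluation for each prompt.
for output in outputs:
    generated_text = output.outputs[0].text
    print(generated_text)
```

Setting `enable_thinking=False` in `apply_chat_template` switches the model to non-thinking mode, as noted in the snippet above.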
|
|
|
## License and use |
|
|
|
R3 is licensed under the Apache 2.0 license. |
|
|
|
## Citation |
|
|
|
```bibtex |
|
@article{anugraha2025r3, |
|
title={R3: Robust Rubric-Agnostic Reward Models}, |
|
author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra}, |
|
journal={arXiv preprint arXiv:2505.13388}, |
|
year={2025} |
|
} |
|
``` |