---
license: apache-2.0
language:
- en
datasets:
- rubricreward/R3-Dataset-4K
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
tags:
- lora
---

<img alt="R3 Logo" src="https://cdn-avatars.huggingface.co/v1/production/uploads/651803f834c26962535eb022/hj3UEN9_9wlkmvMfUY1OL.png" width="150px">

# R3-Qwen3-4B-LoRA-4k

R3-Qwen3-4B-LoRA-4k is part of the R3 family, a series of **R**obust **R**ubric-Agnostic **R**eward Models. We perform SFT on the Qwen3 model family at the 4B, 8B, and 14B scales, as well as on Phi-4-reasoning-plus. Check out [our paper](https://arxiv.org/abs/2505.13388) for more information!

## Model description

- **Model type:** A reward model trained on the curated R3 dataset, collected from 45 diverse sources and covering tasks such as classification, preference optimization, and question answering. Each example in the dataset contains an instruction and task description, input, response(s), evaluation rubrics, and a score along with the corresponding reasoning.
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Qwen/Qwen3-4B
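
Each example pairs the content to be judged with its rubric. As an illustrative sketch only (the actual prompt template used for training is defined in the project repository; the field layout and section names below are assumptions), an evaluation prompt can be assembled like this:

```python
# Illustrative sketch: assemble a rubric-based evaluation prompt in the spirit of R3.
# The exact template lives in https://github.com/rubricreward/r3; this layout is an assumption.

def build_eval_prompt(task_description: str, input_text: str,
                      response: str, rubric: str) -> str:
    """Combine task description, input, response, and rubric into one prompt."""
    return (
        f"### Task Description\n{task_description}\n\n"
        f"### Input\n{input_text}\n\n"
        f"### Response\n{response}\n\n"
        f"### Evaluation Rubric\n{rubric}\n\n"
        "Evaluate the response against the rubric, then output your reasoning "
        "followed by a final score."
    )

prompt = build_eval_prompt(
    task_description="Judge whether the answer is factually correct.",
    input_text="What is the capital of France?",
    response="Paris.",
    rubric="1: incorrect ... 5: fully correct and well supported.",
)
print(prompt)
```

The model then emits its reasoning followed by a score, mirroring the score-plus-reasoning targets in the dataset.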
### Model Sources

- **Project Page:** https://rubricreward.github.io
- **Repository:** https://github.com/rubricreward/r3
- **Paper:** https://arxiv.org/abs/2505.13388
## License and use

R3 is licensed under the Apache 2.0 license.

## Citation

```bibtex
@article{anugraha2025r3,
  title={R3: Robust Rubric-Agnostic Reward Models},
  author={Anugraha, David and Tang, Zilu and Miranda, Lester James V. and Zhao, Hanyang and Farhansyah, Mohammad Rifqi and Kuwanto, Garry and Wijaya, Derry and Winata, Genta Indra},
  journal={arXiv preprint arXiv:2505.13388},
  year={2025}
}
```