---
license: cc-by-nc-4.0
---

This reward model can be used for RLHF, including PPO, iterative SFT, and iterative DPO.

The license is derived from `PKU-Alignment/PKU-SafeRLHF-30K`.

## Training
The base model is `meta-llama/Meta-Llama-3-8B-Instruct`.

We use the training script at `https://github.com/WeiXiongUST/RLHF-Reward-Modeling`.
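
The script trains a pairwise (Bradley-Terry) reward model on preference data. As a rough illustration only (not the repo's actual code, and with illustrative tensor names), the core objective pushes the scalar reward of the chosen response above that of the rejected one:

```python
# Minimal sketch of the pairwise (Bradley-Terry) reward-modeling loss commonly
# used for such training; names and values here are illustrative.
import torch
import torch.nn.functional as F

def pairwise_reward_loss(chosen_rewards: torch.Tensor,
                         rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood that the chosen response beats the rejected one."""
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Example: scalar rewards for a batch of three preference pairs.
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.5, 0.4, 1.1])
loss = pairwise_reward_loss(chosen, rejected)
```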


## Uses

```python
import torch
from transformers import AutoTokenizer, pipeline

rm_tokenizer = AutoTokenizer.from_pretrained("sfairXC/FsfairX-LLaMA3-RM-v0.1")

device = 0  # or accelerator.device when running under 🤗 Accelerate
rm_pipe = pipeline(
    "sentiment-analysis",
    model="sfairXC/FsfairX-LLaMA3-RM-v0.1",
    # device_map="auto",  # alternative to an explicit device index
    device=device,
    tokenizer=rm_tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

# Return the raw score (no softmax/sigmoid), so the output is the scalar reward itself.
pipe_kwargs = {
    "return_all_scores": True,
    "function_to_apply": "none",
    "batch_size": 1,
}

chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

# Apply the Llama-3 chat template and strip the BOS token, since the pipeline's
# tokenizer will add it again.
test_texts = [
    rm_tokenizer.apply_chat_template(
        chat, tokenize=False, add_generation_prompt=False
    ).replace(rm_tokenizer.bos_token, "")
]
pipe_outputs = rm_pipe(test_texts, **pipe_kwargs)
rewards = [output[0]["score"] for output in pipe_outputs]
```
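
The rewards returned above can also be used to rank several candidate responses to the same prompt, e.g. best-of-N rejection sampling for iterative SFT, or building chosen/rejected pairs for DPO. A minimal sketch, reusing `rm_pipe`, `rm_tokenizer`, and `pipe_kwargs` from the snippet above; the prompt and candidate texts are hypothetical placeholders:

```python
# Rank candidate completions for one prompt by their scalar reward.
# In practice the candidates come from your policy model's sampler.
prompt = [{"role": "user", "content": "Explain KL regularization in one sentence."}]
candidates = [
    "KL regularization keeps the fine-tuned policy close to the reference model.",
    "It is a type of optimizer.",
]

texts = [
    rm_tokenizer.apply_chat_template(
        prompt + [{"role": "assistant", "content": c}],
        tokenize=False,
        add_generation_prompt=False,
    ).replace(rm_tokenizer.bos_token, "")
    for c in candidates
]
scores = [out[0]["score"] for out in rm_pipe(texts, **pipe_kwargs)]
best_response = candidates[scores.index(max(scores))]  # keep the highest-reward sample
```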


## Results


As of April 20, 2024, this reward model is the state-of-the-art open-source reward model on RewardBench.

| Metric       | Score  |
|--------------|--------|
| Chat         | 99.44  |
| Chat Hard    | 65.13  |
| Safety       | 88.76  |
| Reasoning    | 88.3   |



## References
This repo is part of our work on iterative rejection-sampling fine-tuning and iterative DPO. If you find its content useful in your work, please consider citing:

```bibtex
@article{dong2023raft,
  title={Raft: Reward ranked finetuning for generative foundation model alignment},
  author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong},
  journal={arXiv preprint arXiv:2304.06767},
  year={2023}
}

@misc{xiong2024iterative,
  title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
  author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
  year={2024},
  eprint={2312.11456},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```