yuchenFan committed on
Commit 452f3d6 · 1 Parent(s): b099578

Upload README.md

Files changed (1)
  1. README.md +34 -10
README.md CHANGED
@@ -13,35 +13,59 @@ license: apache-2.0
 
 EurusPRM-Stage2 is trained using **[Implicit PRM](https://arxiv.org/abs/2412.01981)**, which obtains process rewards at no additional cost: it only requires training an ORM on cheaper response-level labels. During inference, implicit process rewards are obtained by running a forward pass and computing the log-likelihood ratio at each step.
 
- <img src="./figures/implicit.png" alt="prm" style="zoom: 33%;" />
 
 The key ingredient of Implicit PRM is the reward representation, as demonstrated below:
 
 <aside>
 
- ***Proposition***: *Consider an ORM where the reward is parameterized by the log-likelihood ratio of two causal LMs, i.e., $r_\phi(\mathbf{y}):= \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})}$. Define $q_\phi^t(\mathbf{y}_{<t}, y_t):= \sum_{i=1}^{t} \beta \log \frac{\pi_\phi(y_{i}|\mathbf{y}_{<i})}{\pi_\text{ref}(y_{i}|\mathbf{y}_{<i})}$. Then $q_\phi^t$ is the exponential average of $r_\phi$ at step $t$:*
 
 $$
- q_\phi^t(\mathbf{y}_{<t}, y_t) = \beta \log \mathbb{E}_{\pi_\text{ref}(\mathbf{y}|\mathbf{y}_{\leq t})} e^{\frac{1}{\beta}r_\phi(\mathbf{y})}
 $$
 
- *Hence, **$q_\phi^t$ represents an exact expectation of the outcome reward $r_\phi$ at step $t$, i.e., the Q value.***
 
- </aside>
 
- The proposition indicates that when modeling $r_\phi(\mathbf{y}):= \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})}$ to train an ORM with the standard pipeline, where $\beta$ is a hyperparameter, $\phi$ can implicitly learn a Q function. Hence, the process reward $r_\phi^t$ can be obtained by:
 
 $$
- r_\phi^t := q_\phi^t - q_\phi^{t-1} = \beta \log \frac{\pi_\phi(y_{t}|\mathbf{y}_{<t})}{\pi_\text{ref}(y_{t}|\mathbf{y}_{<t})}
 $$
 
- Therefore, **we can indeed obtain PRMs simply by collecting response-level data and training an ORM, without any burden of annotating step labels.**
 
- The proposition is **agnostic to the specific choice of ORM training objective**. It can be instantiated with the same objectives as vanilla ORM training, the only difference being the substitution of $r_\phi \left( \mathbf{y} \right)$ with $\beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})}$. For example, [DPO](https://arxiv.org/abs/2305.18290) already meets our assumption and serves as a strong variant, while in this work we instantiate our implicit PRM with the cross-entropy (CE) loss due to its memory efficiency:
 
 $$
- \mathcal{L}_{CE} = l \cdot \log \sigma \left( \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})} \right) + (1-l) \cdot \log\left[ 1 - \sigma \left( \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})} \right) \right]
 $$
 
 We started the second-stage training on top of [EurusPRM-Stage1](https://huggingface.co/PRIME-RL/EurusPRM-Stage1) with fine-grained step-level labels. To obtain step-level labels, we employed Llama-3.1-70B-Inst and Qwen2.5-72B-Inst to insert nuanced errors into correct solutions. We also mixed response-level data in this stage. The model was continually trained with $\mathcal{L}_{CE}$ at a learning rate of 5e-7 and a batch size of 64.
 
 EurusPRM-Stage2 is trained using **[Implicit PRM](https://arxiv.org/abs/2412.01981)**, which obtains process rewards at no additional cost: it only requires training an ORM on cheaper response-level labels. During inference, implicit process rewards are obtained by running a forward pass and computing the log-likelihood ratio at each step.
 
+ <img src="./figs/implicit.png" alt="prm" style="zoom: 33%;" />
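
The sketch below illustrates this inference procedure with Hugging Face `transformers`: per-token log-likelihood ratios between the PRM $\pi_\phi$ and a reference model $\pi_\text{ref}$ are summed over each step's tokens to yield step-level rewards (the quantity $r_\phi^t$ derived later in this section). It is a minimal sketch, not this repo's official usage code: the reference-model path, the `"\n\n"` step delimiter, the toy problem, and the value of $\beta$ are illustrative placeholders, and the repo id for $\pi_\phi$ is assumed to follow the Stage-1 naming.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


@torch.no_grad()
def per_token_logps(model, input_ids: torch.Tensor) -> torch.Tensor:
    """Log-probability of each token given its prefix; returns shape (seq_len - 1,)."""
    logits = model(input_ids.unsqueeze(0)).logits[0, :-1]  # positions 0..T-2 predict tokens 1..T-1
    logps = torch.log_softmax(logits.float(), dim=-1)
    return logps.gather(-1, input_ids[1:].unsqueeze(-1)).squeeze(-1)


# pi_phi is the implicit PRM (repo id assumed); pi_ref is its reference model (placeholder path).
tok = AutoTokenizer.from_pretrained("PRIME-RL/EurusPRM-Stage2")
pi_phi = AutoModelForCausalLM.from_pretrained("PRIME-RL/EurusPRM-Stage2")
pi_ref = AutoModelForCausalLM.from_pretrained("path/to/reference-model")

prompt = "Solve: 2 + 3 * 4 = ?"
steps = ["3 * 4 = 12.", "So 2 + 3 * 4 = 2 + 12 = 14."]  # toy response split into steps
beta = 1.0  # hyperparameter; a positive constant that only rescales the rewards

# Token index where each step ends (assumes each prefix tokenizes to a prefix of the full text).
text, step_ends = prompt, []
for step in steps:
    text = text + "\n\n" + step
    step_ends.append(len(tok(text).input_ids))

ids = tok(text, return_tensors="pt").input_ids[0]
logratio = beta * (per_token_logps(pi_phi, ids) - per_token_logps(pi_ref, ids))

# r_phi^t = q_phi^t - q_phi^{t-1}: the log-likelihood ratio summed over step t's tokens.
start = len(tok(prompt).input_ids)
for step, end in zip(steps, step_ends):
    print(f"{logratio[start - 1:end - 1].sum().item():+.3f}  {step}")
    start = end
```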
 
 The key ingredient of Implicit PRM is the reward representation, as demonstrated below:
 
 <aside>
 
+ ***Proposition***
+
+ Consider an ORM where the reward is parameterized by the log-likelihood ratio of two causal LMs, i.e.,
+
+ $$
+ r_\phi(\mathbf{y}) := \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})}.
+ $$
+
+ Define
+
+ $$
+ q_\phi^t(\mathbf{y}_{<t}, y_t) := \sum_{i=1}^{t} \beta \log \frac{\pi_\phi(y_{i}|\mathbf{y}_{<i})}{\pi_\text{ref}(y_{i}|\mathbf{y}_{<i})}.
+ $$
+
+ Here, $q_\phi^t$ is the exponential average of $r_\phi$ at step $t$:
 
 $$
+ q_\phi^t(\mathbf{y}_{<t}, y_t) = \beta \log \mathbb{E}_{\pi_\text{ref}(\mathbf{y}|\mathbf{y}_{\leq t})} \left[ e^{\frac{1}{\beta} r_\phi(\mathbf{y})} \right]
 $$
 
+ Hence, $q_\phi^t$ represents an exact expectation of the outcome reward $r_\phi$ at step $t$, i.e., the Q value.
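
As a quick sanity check of the proposition, take $t$ to be the final step $T$ of a complete response $\mathbf{y}$: by the chain rule, the sum of per-token log-ratios collapses into the full-sequence log-likelihood ratio, so the Q value at the last step reduces exactly to the outcome reward,

$$
q_\phi^{T}(\mathbf{y}_{<T}, y_T) = \sum_{i=1}^{T} \beta \log \frac{\pi_\phi(y_{i}|\mathbf{y}_{<i})}{\pi_\text{ref}(y_{i}|\mathbf{y}_{<i})} = \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})} = r_\phi(\mathbf{y}).
$$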
+
+ </aside>
+
+ The proposition indicates that when modeling
 
+ $$
+ r_\phi(\mathbf{y}) := \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})}
+ $$
 
+ to train an ORM with the standard pipeline, where $\beta$ is a hyperparameter, $\phi$ can implicitly learn a Q function. Hence, the process reward $r_\phi^t$ can be obtained by:
 
 $$
+ r_\phi^t := q_\phi^t - q_\phi^{t-1} = \beta \log \frac{\pi_\phi(y_{t}|\mathbf{y}_{<t})}{\pi_\text{ref}(y_{t}|\mathbf{y}_{<t})}.
 $$
 
+ Therefore, we can indeed obtain PRMs simply by collecting response-level data and training an ORM, without any burden of annotating step labels.
+
+ The proposition is agnostic to the specific choice of ORM training objective. It can be instantiated with the same objectives as vanilla ORM training, the only difference being the substitution of $r_\phi(\mathbf{y})$ with
+
+ $$
+ \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})}.
+ $$
 
+ For example, [DPO](https://arxiv.org/abs/2305.18290) already meets our assumption and serves as a strong variant, while in this work we instantiate our implicit PRM with the cross-entropy (CE) loss due to its memory efficiency:
 
 $$
+ \mathcal{L}_{CE} = l \cdot \log \sigma \left( \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})} \right) + (1 - l) \cdot \log \left[ 1 - \sigma \left( \beta \log \frac{\pi_\phi(\mathbf{y})}{\pi_\text{ref}(\mathbf{y})} \right) \right]
 $$
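
As a concrete reading of this objective, the PyTorch sketch below evaluates $\mathcal{L}_{CE}$ from a response's summed log-likelihood ratio and its response-level label $l \in \{0, 1\}$. It is a minimal sketch rather than the training code used for this model: the function name and the default $\beta$ are illustrative, and the code returns the negative of the displayed expression, i.e., the standard binary cross-entropy to be minimized.

```python
import torch
import torch.nn.functional as F


def implicit_prm_ce_loss(logratio_sum: torch.Tensor,
                         label: torch.Tensor,
                         beta: float = 1.0) -> torch.Tensor:
    """Binary cross-entropy on the implicit reward r_phi(y) = beta * (summed per-token log-ratio).

    logratio_sum: sum over a response of log pi_phi(y_i|y_<i) - log pi_ref(y_i|y_<i), shape (batch,)
    label:        response-level correctness label l in {0, 1}, shape (batch,)
    beta:         hyperparameter (illustrative default)
    """
    reward = beta * logratio_sum
    # l * log sigmoid(r) + (1 - l) * log(1 - sigmoid(r)), negated so that lower is better;
    # log(1 - sigmoid(r)) == logsigmoid(-r) keeps the computation numerically stable.
    return -(label * F.logsigmoid(reward) + (1.0 - label) * F.logsigmoid(-reward)).mean()


# Toy usage with made-up numbers: two responses, one labeled correct and one incorrect.
print(implicit_prm_ce_loss(torch.tensor([2.3, -0.7]), torch.tensor([1.0, 0.0])))
```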
 
 We started the second-stage training on top of [EurusPRM-Stage1](https://huggingface.co/PRIME-RL/EurusPRM-Stage1) with fine-grained step-level labels. To obtain step-level labels, we employed Llama-3.1-70B-Inst and Qwen2.5-72B-Inst to insert nuanced errors into correct solutions. We also mixed response-level data in this stage. The model was continually trained with $\mathcal{L}_{CE}$ at a learning rate of 5e-7 and a batch size of 64.
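
For quick reference, the Stage-2 recipe described above can be summarized as the following hypothetical configuration sketch; it records only the values stated in this card and is not the actual training script or config.

```python
# Hypothetical summary of the Stage-2 recipe above (values taken from the text, nothing official).
stage2_recipe = {
    "init_from": "PRIME-RL/EurusPRM-Stage1",     # continual training starts from Stage 1
    "objective": "L_CE on the implicit reward",  # the cross-entropy loss defined above
    "learning_rate": 5e-7,
    "batch_size": 64,
    "data": {
        "step_level": "correct solutions with nuanced errors inserted by "
                      "Llama-3.1-70B-Inst and Qwen2.5-72B-Inst",
        "response_level": "mixed in during this stage",
    },
}
```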