czczup committed
Commit e24fb8d
1 Parent(s): 905c20d

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +10 -10
  2. merges.txt +0 -1
README.md CHANGED
@@ -65,8 +65,8 @@ To construct this dataset, we propose an efficient data construction pipeline. S
 
 - **For samples with clear ground truths:**
 the model is prompted to first provide the reasoning process and then give the final answer in the format like `Final Answer: ***`.
- Responses matching the ground truth answer constitute the positive set $\mathcal{Y}_p$, while those that do not match make up the negative set $\mathcal{Y}_n$. Additionally, responses that fail to provide a clear final answer are also merged into $\mathcal{Y}_n$.
- Given these responses labeled as positive or negative, we build the preference pairs by selecting a chosen response $y_c$ from $\mathcal{Y}_p$ and a negative response $y_r$ from $\mathcal{Y}_n$.
+ Responses matching the ground truth answer constitute the positive set \\(\mathcal{Y}_p\\), while those that do not match make up the negative set \\(\mathcal{Y}_n\\). Additionally, responses that fail to provide a clear final answer are also merged into \\(\mathcal{Y}_n\\).
+ Given these responses labeled as positive or negative, we build the preference pairs by selecting a chosen response \\(y_c\\) from \\(\mathcal{Y}_p\\) and a negative response \\(y_r\\) from \\(\mathcal{Y}_n\\).
 
 - **For samples without clear ground truths:**
 we propose a simple yet effective method: Dropout Next-Token Prediction (Dropout NTP).
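For the branch with clear ground truths, the pair construction described above can be illustrated with a short Python sketch. This is not the open-sourced pipeline code: the function names, the exact-match rule on the extracted final answer, and the one-pair-per-sample cap are assumptions made for illustration.

```python
# Illustrative sketch only: split sampled responses into the positive set Y_p and
# the negative set Y_n by comparing the extracted final answer with the ground
# truth, then pair a chosen response y_c with a rejected response y_r.
import re
from itertools import product

def split_by_ground_truth(responses, ground_truth):
    positives, negatives = [], []
    for resp in responses:
        m = re.search(r"Final Answer:\s*(.+)", resp)
        if m is None:
            negatives.append(resp)  # no clear final answer -> merged into Y_n
        elif m.group(1).strip() == ground_truth.strip():
            positives.append(resp)  # matches the ground truth -> Y_p
        else:
            negatives.append(resp)  # wrong final answer -> Y_n
    return positives, negatives

def build_preference_pairs(positives, negatives, max_pairs=1):
    pairs = [{"chosen": y_c, "rejected": y_r}
             for y_c, y_r in product(positives, negatives)]
    return pairs[:max_pairs]
```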
@@ -85,16 +85,16 @@ The data construction pipeline is open-sourced, see more details in our [documen
 ### Mixed Preference Optimization
 
 The key insight behind MPO is that *an effective PO process should enable the model to learn the relative preference between pairs of responses, the absolute quality of individual responses, and the process for generating preferred responses.* We define the training objective as a combination of
- preference loss $\mathcal{L}_{\text{p}}$,
- quality loss $\mathcal{L}_{\text{q}}$,
- and generation loss $\mathcal{L}_{\text{g}}$,
+ preference loss \\(\mathcal{L}_{\text{p}}\\),
+ quality loss \\(\mathcal{L}_{\text{q}}\\),
+ and generation loss \\(\mathcal{L}_{\text{g}}\\),
 referred to as Mixed Preference Optimization:
 
 $$
 \mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
 $$
 
- where $w_{*}$ represents the weight assigned to each loss component.
+ where \\(w_{*}\\) represents the weight assigned to each loss component.
 In this work, we empirically compare different variants of preference loss.
 Based on the experimental results, we use DPO as our preference loss and BCO as our quality loss.
 
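The combined MPO objective above is simply a weighted sum of the three terms. A minimal sketch follows; the default weights are placeholders, since this excerpt does not state the values used in training.

```python
# Minimal sketch of the MPO objective L = w_p*L_p + w_q*L_q + w_g*L_g
# (preference + quality + generation). Weight defaults are placeholders.
def mpo_loss(loss_p, loss_q, loss_g, w_p=1.0, w_q=1.0, w_g=1.0):
    return w_p * loss_p + w_q * loss_q + w_g * loss_g
```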
 
@@ -106,8 +106,8 @@ $$
 \mathcal{L}_{\text{p}}=-\log \sigma\left(\beta \log \frac{\pi_\theta\left(y_c \mid x\right)}{\pi_0\left(y_c \mid x\right)}-\beta \log \frac{\pi_\theta\left(y_r \mid x\right)}{\pi_0\left(y_r \mid x\right)}\right),
 $$
 
- where $\beta$ is the KL penalty coefficient, and $x$, $y_c$, and $y_r$ are user query, chosen response, and rejected response, respectively.
- The policy model $\pi_\theta$ is initialized from model $\pi_0$.
+ where \\(\beta\\) is the KL penalty coefficient, and \\(x\\), \\(y_c\\), and \\(y_r\\) are the user query, chosen response, and rejected response, respectively.
+ The policy model \\(\pi_\theta\\) is initialized from model \\(\pi_0\\).
 
 Additionally, the BCO loss is employed as the quality loss, which helps the model to understand the absolute quality of individual responses.
 The loss function is defined as:
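A PyTorch sketch of the DPO preference loss defined at the top of this hunk, assuming summed per-response log-probabilities from the policy model and the frozen reference model are already available; the `beta` default is a placeholder rather than a value taken from the README.

```python
# Sketch of the DPO preference loss: L_p = -log sigma(beta * (log-ratio_c - log-ratio_r)).
import torch.nn.functional as F

def dpo_preference_loss(policy_chosen_logps, policy_rejected_logps,
                        ref_chosen_logps, ref_rejected_logps, beta=0.1):
    chosen_logratios = policy_chosen_logps - ref_chosen_logps        # log pi_theta(y_c|x)/pi_0(y_c|x)
    rejected_logratios = policy_rejected_logps - ref_rejected_logps  # log pi_theta(y_r|x)/pi_0(y_r|x)
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```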
@@ -116,7 +116,7 @@ $$
 \mathcal{L}_{\text{q}}=\mathcal{L}_{\text{q}}^+ + \mathcal{L}_{\text{q}}^-,
 $$
 
- where $\mathcal{L}_{\text{q}}^{+}$ and $\mathcal{L}_{\text{q}}^{+}$ represent the loss for chosen and rejected responses, respectively.
+ where \\(\mathcal{L}_{\text{q}}^{+}\\) and \\(\mathcal{L}_{\text{q}}^{-}\\) represent the loss for chosen and rejected responses, respectively.
 Each response type's loss is calculated independently, requiring the model to differentiate the absolute quality of individual responses. The loss terms are given by:
 
 $$
@@ -127,7 +127,7 @@ $$
 \mathcal{L}_{\text{q}}^-=-\log \sigma\left(-\left(\beta \log \frac{\pi_\theta\left(y_r \mid x\right)}{\pi_0\left(y_r \mid x\right)} - \delta\right) \right),
 $$
 
- where $\delta$ represents the reward shift, calculated as the moving average of previous rewards to stabilize training.
+ where \\(\delta\\) represents the reward shift, calculated as the moving average of previous rewards to stabilize training.
 
 Finally, the SFT loss is used as the generation loss to help the model learn the generation process of preferred responses.
 The loss function is defined as:
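A sketch of the BCO-style quality loss from the two hunks above. Only the rejected-response term appears in this excerpt; the chosen-response term is assumed to mirror it with the sign flipped, and the running-mean update of the reward shift `delta` is likewise an assumption kept outside the function.

```python
# Sketch of the quality loss L_q = L_q^+ + L_q^-. L_q^- follows the formula above;
# the L_q^+ term is assumed symmetric, and delta is a running mean of rewards
# maintained elsewhere in the training loop.
import torch.nn.functional as F

def bco_quality_loss(policy_chosen_logps, ref_chosen_logps,
                     policy_rejected_logps, ref_rejected_logps,
                     delta, beta=0.1):
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    loss_pos = -F.logsigmoid(chosen_reward - delta).mean()       # assumed L_q^+
    loss_neg = -F.logsigmoid(-(rejected_reward - delta)).mean()  # L_q^- as defined above
    return loss_pos + loss_neg
```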
 
merges.txt CHANGED
@@ -1,4 +1,3 @@
- #version: 0.2
 Ġ Ġ
 ĠĠ ĠĠ
 i n
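The line removed from merges.txt is the version header that GPT-2-style byte-level BPE tokenizers write at the top of their merge-rule file; the remaining lines are merge rules, with `Ġ` marking a leading space. Below is a minimal sketch of reading such a file while tolerating an optional version header; the function name is hypothetical.

```python
# Illustrative only: load BPE merge rules from merges.txt, skipping an optional
# "#version: ..." header line such as the one removed in this commit.
def load_merges(path="merges.txt"):
    merges = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#version:"):
                continue
            first, second = line.split(" ")  # each rule is "tokenA tokenB"
            merges.append((first, second))
    return merges
```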