---
library_name: transformers
license: apache-2.0
datasets:
- DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K-tokenized
base_model:
- Qwen/Qwen2.5-7B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

# LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization

This repo provides the Qwen2.5-7B-LongPO-128K checkpoint from our paper "LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization".

<h5 align="left">

[![arXiv](https://img.shields.io/badge/Arxiv-2502.13922-AD1C18.svg?logo=arXiv)](http://arxiv.org/abs/2502.13922)
[![hf_paper](https://img.shields.io/badge/🤗-HF%20Daily-red.svg)](https://huggingface.co/papers/2502.13922)
</h5>

## Highlights of LongPO

- Self-evolving long-context alignment without annotations from humans or superior LLMs.
- Context-length extension and alignment achieved in a single stage.
- No degradation of short-context capabilities.

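For orientation only (this is a sketch, not the paper's exact objective): LongPO builds on DPO-style preference optimization, where for a given instruction the chosen response $y_w$ is self-generated from a short context and the rejected response $y_l$ from the corresponding long context; the paper's full objective additionally includes a short-to-long constraint. The standard DPO loss such methods extend is:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$
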
<img width="1031" alt="image" src="https://github.com/user-attachments/assets/84f3c93f-909d-4ef7-a33a-107ca2deec42" />

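## Usage

A minimal quick-start sketch with 🤗 Transformers (illustrative: the `generate` helper and its defaults are our additions, not part of the released code). Greedy decoding matches the evaluation setup reported below; requires `transformers` and `torch`.

```python
# Hypothetical helper for running the checkpoint; adjust dtype/device to your setup.
MODEL_ID = "DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    # Heavy dependencies imported lazily so the module stays importable without them.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Apply the Qwen2.5 chat template and append the assistant header.
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    # Greedy decoding (do_sample=False), as used for the evaluations below.
    output = model.generate(input_ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```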
## Models and Training Data

| Models | Base Model | Training Data | # Data Samples |
| ------------------------------------------------------------ | ------------------------ | ------------------------------------------------------------ | -------------- |
| [Mistral-7B-LongPO-128K](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-128K) | Mistral-7B-Instruct-v0.2 | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-128K-tokenized) | 45K |
| [Qwen2.5-7B-LongPO-128K](https://huggingface.co/DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K) | Qwen2.5-7B-Instruct | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Qwen2.5-7B-LongPO-128K-tokenized) | 32K |
| [Mistral-7B-LongPO-256K-EXP](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-256K-EXP)* | Mistral-7B-LongPO-128K | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-256K-tokenized) | 16K |
| [Mistral-7B-LongPO-512K-EXP](https://huggingface.co/DAMO-NLP-SG/Mistral-7B-LongPO-512K-EXP)* | Mistral-7B-LongPO-128K | [HF Link](https://huggingface.co/datasets/DAMO-NLP-SG/Mistral-7B-LongPO-512K-tokenized) | 2.5K |

\* denotes an experimental version (prepared for the rebuttal) that may not have been fully tuned or trained on sufficient data to reach convergence.

## Evaluation

### InfiniteBench

| Model | Train/Claimed Length | En.Sum | En.QA | En.MC | AVG. |
| ---------------- | -------------------- | ------ | ------ | ------ | ------ |
| GPT-4-128K | 128K | 14.73 | 22.44 | 67.25 | 34.81 |
| Qwen2-72B | 128K | 24.32ᵇ | 7.03ᵇ | 72.05ᵇ | 34.47ᵇ |
| LLaMA 3.1-70B | 128K | 33.55ᵇ | 36.08ᵇ | 69.00ᵇ | 46.21ᵇ |
| LLaMA 3.1-8B | 128K | 28.06ᵇ | 30.47ᵇ | 58.08ᵇ | 38.87ᵇ |
| GLM-4-9B | 128K | 14.84ᵇ | 9.51ᵇ | 67.25ᵇ | 30.53ᵇ |
| GLM-4-9B-1M | 1M | 28.30 | 9.70 | 68.60 | 35.53 |
| LWM-7B-1M | 1M | 4.33ᵇ | 0.00ᵇ | 3.06ᵇ | 2.46ᵇ |
| YaRN-Mistral-7B | 128K | 9.09 | 9.55 | 27.95 | 15.53 |
| Mistral-7B | 32K | 22.13 | 4.93 | 14.41 | 13.82 |
| - SFT | 128K | 23.44 | 13.45 | 53.21 | 30.03 |
| - DPO | 128K | 15.21 | 10.34 | 48.14 | 25.56 |
| - LongPO (iter1) | 128K | 27.05 | 23.51 | 67.25 | 39.27 |
| - LongPO (iter2) | 256K | 28.16 | 24.43 | 66.35 | 39.65 |
| - LongPO (iter3) | 512K | 29.10 | 27.85 | 66.67 | 41.21 |
| Qwen2.5-7B | 128K | 22.89 | 6.08 | 52.40 | 27.12 |
| - LongPO (iter1) | 128K | 32.06 | 17.32 | 72.05 | 40.48 |

- Our results are evaluated with greedy decoding.
- Baseline results marked with ᵇ were evaluated by us; unmarked baseline results are taken from their official reports.

### RULER

| Model | NIAH | VT | AGG | QA | AVG (13 tasks) |
| ------------------------ | ----- | ----- | ----- | ----- | -------------- |
| Qwen2.5-7B-Instruct | 82.10 | 80.09 | 74.50 | 54.30 | 76.50 |
| Qwen2.5-7B-LongPO-128K | 95.82 | 89.71 | 78.67 | 59.40 | 87.11 |
| Mistral-7B-Instruct-v0.2 | 72.60 | 74.40 | 64.40 | 52.20 | 68.40 |
| Mistral-7B-LongPO-128K | 96.88 | 96.49 | 71.55 | 64.81 | 88.02 |
| Mistral-7B-LongPO-256K-EXP | 96.80 | 97.00 | 69.14 | 64.87 | 87.65 |
| Mistral-7B-LongPO-512K-EXP | 97.28 | 97.48 | 69.22 | 64.92 | 88.00 |

### Short Context

| Model | MMLU | ARC-C | HellaSwag | Winogrande | Avg |
|-------|-------|--------|------------|-------------|-----|
| Mistral-7B-Instruct-v0.2 | 59.15 | 59.26 | 83.20 | 78.40 | 70.00 |
| Mistral-7B-LongPO-128K | 59.99 | 59.34 | 82.99 | 78.53 | 70.21 |
| Mistral-7B-LongPO-256K-EXP | 59.47 | 60.28 | 83.14 | 78.14 | 70.26 |
| Mistral-7B-LongPO-512K-EXP | 59.51 | 60.58 | 82.87 | 77.66 | 70.16 |
| Qwen2.5-7B-Instruct | 74.28 | 67.15 | 81.41 | 74.66 | 74.38 |
| Qwen2.5-7B-LongPO-128K | 73.64 | 65.70 | 80.82 | 74.98 | 73.79 |

## Citation

If you find our project useful, please star our repo and cite our paper as follows:

```bibtex
@inproceedings{
chen2025longpo,
title={Long{PO}: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization},
author={Guanzheng Chen and Xin Li and Michael Shieh and Lidong Bing},
booktitle={The Thirteenth International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=qTrEq31Shm}
}
```