Commit 485a1fd (verified) by Wanfq · 1 parent: a9d864e

Update README.md

Files changed (1): README.md (+298, −3)
---
license: apache-2.0
datasets:
- Tongyi-Zhiwen/DocQA-RL-1.6K
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
tags:
- long-context
- large-reasoning-model
---

<p align="center" width="100%">
</p>

<div id="top" align="center">

QwenLong-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning
-----------------------------
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![arXiv](https://img.shields.io/badge/arXiv-xxxx.xxxxx-b31b1b.svg)](https://arxiv.org/abs/xxxx.xxxxx)
[![GitHub](https://img.shields.io/badge/GitHub-TongyiZhiwen-4b32c3?logo=github)](https://github.com/Tongyi-Zhiwen)
[![ModelScope](https://img.shields.io/badge/🤖%20ModelScope-purple)](https://modelscope.cn/organization/iic/)
[![HuggingFace](https://img.shields.io/badge/🤗%20HuggingFace-yellow)](https://huggingface.co/Tongyi-Zhiwen)

<!-- **Authors:** -->

_**Fanqi Wan, Weizhou Shen, Shengyi Liao, Yingcheng Shi, Chenliang Li, Ziyi Yang, Ji Zhang, Fei Huang, Jingren Zhou, Ming Yan**_

<!-- **Affiliations:** -->

_Qwen-Doc Team, Alibaba Group_

<p align="center">
<img src="./assets/fig1.png" width="100%"> <br>
</p>

</div>

## 🎉 News

- **May 26, 2025:** 🔥 We release [🤗 QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B), the first long-context LRM trained with reinforcement learning for long-context reasoning. Experiments on seven long-context DocQA benchmarks show that **QwenLong-L1-32B outperforms flagship LRMs such as OpenAI-o3-mini and Qwen3-235B-A22B and performs on par with Claude-3.7-Sonnet-Thinking**, placing it among the leading state-of-the-art LRMs.

- **May 26, 2025:** 🔥 We release [🤗 DocQA-RL-1.6K](https://huggingface.co/datasets/Tongyi-Zhiwen/DocQA-RL-1.6K), a specialized RL training dataset of 1.6K document question answering (DocQA) problems spanning mathematical, logical, and multi-hop reasoning domains.

## 📚 Introduction

In this work, we propose QwenLong-L1, a novel reinforcement learning (RL) framework designed to facilitate the transition of LRMs from short-context proficiency to robust long-context generalization. In our preliminary experiments, we illustrate the differences between the training dynamics of short-context and long-context reasoning RL.

<p align="center">
<img src="./assets/fig2.png" width="100%"> <br>
</p>

Our framework enhances short-context LRMs through progressive context scaling during RL training. The framework comprises three core components: a warm-up supervised fine-tuning (SFT) phase to initialize a robust policy, a curriculum-guided RL phase that facilitates stable adaptation from short to long contexts, and a difficulty-aware retrospective sampling mechanism that adjusts training complexity across stages to incentivize policy exploration. Leveraging recent RL algorithms, including GRPO and DAPO, our framework integrates hybrid reward functions combining rule-based and model-based binary outcome rewards to balance precision and recall. Through strategic utilization of group relative advantages during policy optimization, it guides LRMs to learn effective reasoning patterns essential for robust long-context grounding and superior reasoning capabilities.

<p align="center">
<img src="./assets/fig3.png" width="100%"> <br>
</p>

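The hybrid reward above is, in essence, the maximum of a rule-based and a model-based binary outcome reward. The following is a minimal, illustrative sketch of that combination; the function names and the `judge` callable are placeholders of ours, not the released training code:

```python
import re


def rule_based_reward(prediction: str, gold: str) -> float:
    """Binary rule-based outcome reward: 1.0 only on an exact (normalized) match."""
    normalize = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    return 1.0 if normalize(prediction) == normalize(gold) else 0.0


def hybrid_reward(prediction: str, gold: str, judge) -> float:
    """Take the maximum of the rule-based and model-based binary rewards.

    `judge` is any binary verifier (e.g., a small LLM prompted for a yes/no verdict),
    so strict matching preserves precision while the judge recovers recall.
    """
    rule = rule_based_reward(prediction, gold)
    if rule == 1.0:
        return 1.0  # no need to query the judge when the rule already fires
    model = 1.0 if judge(prediction, gold) else 0.0
    return max(rule, model)
```
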
## 🎯 Model Release

We release [🤗 QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B), the first long-context LRM trained with reinforcement learning for long-context reasoning. Experiments on seven long-context DocQA benchmarks show that **QwenLong-L1-32B outperforms flagship LRMs such as OpenAI-o3-mini and Qwen3-235B-A22B and performs on par with Claude-3.7-Sonnet-Thinking**, placing it among the leading state-of-the-art LRMs.

Here are the evaluation results.

<p align="center">
<img src="./assets/tab4.png" width="100%"> <br>
</p>

## 🛠️ Requirements

```bash
# Create the conda environment
conda create -n qwenlongl1 python==3.10
conda activate qwenlongl1

# Install requirements
pip3 install -r requirements.txt

# Install verl
cd verl
pip3 install -e .

# Install vLLM
pip3 install vllm==0.7.3

# Install flash-attn
pip3 install flash-attn --no-build-isolation
```

## 🚀 Quick Start

Here's how you can run the model using 🤗 Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Tongyi-Zhiwen/QwenLong-L1-32B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
template = """Please read the following text and answer the question below.

<text>
$DOC$
</text>

$Q$

Format your response as follows: "Therefore, the answer is (insert answer here)"."""
context = "<YOUR CONTEXT HERE>"
question = "<YOUR QUESTION HERE>"
prompt = template.replace('$DOC$', context.strip()).replace('$Q$', question.strip())
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=10000,
    temperature=0.7,
    top_p=0.95
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151649 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151649)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```

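If you would rather serve the model (as in the Evaluation section below) and query it through an OpenAI-compatible endpoint, a minimal sketch looks like the following. It assumes a local vLLM server started with `vllm serve Tongyi-Zhiwen/QwenLong-L1-32B --port 23547 --api-key "token-abc123"`, and the prompt is built from the same template as above:

```python
from openai import OpenAI

# Assumes a local OpenAI-compatible vLLM server (see the Evaluation section below):
#   vllm serve Tongyi-Zhiwen/QwenLong-L1-32B --port 23547 --api-key "token-abc123"
client = OpenAI(base_url="http://127.0.0.1:23547/v1", api_key="token-abc123")

prompt = "<PROMPT BUILT FROM THE TEMPLATE ABOVE>"

completion = client.chat.completions.create(
    model="Tongyi-Zhiwen/QwenLong-L1-32B",  # the served model name defaults to the model path
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
    top_p=0.95,
    max_tokens=10000,
)
print(completion.choices[0].message.content)
```
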
## 🗂️ Dataset

To construct a challenging RL dataset for verifiable long-context reasoning, we develop [🤗 DocQA-RL-1.6K](https://huggingface.co/datasets/Tongyi-Zhiwen/DocQA-RL-1.6K), which comprises 1.6K DocQA problems across three reasoning domains:

(1) Mathematical Reasoning: We use 600 problems from the DocMath dataset, requiring numerical reasoning across long and specialized documents such as financial reports. For DocMath, we sample 75% of the items in each subset of its valid split for training and the remaining 25% for evaluation;

(2) Logical Reasoning: We employ DeepSeek-R1 to synthesize 600 multi-choice questions requiring logical analysis of real-world documents spanning legal, financial, insurance, and production domains from our curated collection;

(3) Multi-Hop Reasoning: We sample 200 examples from MultiHopRAG and 200 examples from Musique, emphasizing cross-document reasoning.

Please download the following datasets and place them in `./datasets/` for training and evaluation.

RL training data: [🤗 DocQA-RL-1.6K](https://huggingface.co/datasets/Tongyi-Zhiwen/DocQA-RL-1.6K).

Evaluation data: [🤗 docmath](https://huggingface.co/datasets/Tongyi-Zhiwen/docmath), [🤗 frames](https://huggingface.co/datasets/Tongyi-Zhiwen/frames), [🤗 longbench](https://huggingface.co/datasets/Tongyi-Zhiwen/longbench).

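One way to fetch these repositories into `./datasets/` is via `huggingface_hub`. The sketch below simply mirrors each dataset repo into a folder named after it; the local folder layout is an assumption, so adjust it to whatever the training and evaluation scripts expect:

```python
from huggingface_hub import snapshot_download

# RL training data plus the evaluation sets listed above.
DATASET_REPOS = [
    "Tongyi-Zhiwen/DocQA-RL-1.6K",
    "Tongyi-Zhiwen/docmath",
    "Tongyi-Zhiwen/frames",
    "Tongyi-Zhiwen/longbench",
]

for repo_id in DATASET_REPOS:
    snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        local_dir=f"./datasets/{repo_id.split('/')[-1]}",
    )
```
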
## 💻 Training

We provide basic demo training code for single-stage RL training with DAPO.

First, we start a local verifier.

```bash
export CUDA_VISIBLE_DEVICES=0

vllm serve "Qwen/Qwen2.5-1.5B-Instruct" \
    --host 0.0.0.0 \
    --port 23547
```
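
Before launching the multi-node job, it can be worth confirming that the verifier endpoint is reachable. A minimal check through its OpenAI-compatible API (host and port as configured above; no API key is set, so any placeholder works) might look like:

```python
from openai import OpenAI

# The verifier started above listens on port 23547; replace the host with yours.
client = OpenAI(base_url="http://<YOUR_VERIFIER_HOST_HERE>:23547/v1", api_key="EMPTY")
print([m.id for m in client.models.list().data])  # should include Qwen/Qwen2.5-1.5B-Instruct
```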

Then, we start RL training with 4 nodes.

```bash
export PROJ_DIR="<YOUR_PROJ_DIR_HERE>"
export MASTER_IP="<YOUR_MASTER_IP_HERE>" # ray master ip
export NNODES=4 # total GPU nodes
export NODE_RANK=${RANK} # rank of current node
export PORT=6382
export WANDB_API_KEY="<YOUR_WANDB_API_KEY_HERE>"
export WANDB_PROJECT="QwenLong-L1"
export LLM_JUDGE=Y # 'Y': LLM JUDGE, 'N': RULE BASED
export VLLM_ATTENTION_BACKEND=FLASH_ATTN
# verifier
export VERIFIER_PATH="Qwen/Qwen2.5-1.5B-Instruct"
export VERIFIER_HOST="<YOUR_VERIFIER_HOST_HERE>"
export VERIFIER_PORT="23547"

ray_start_retry() {
    while true; do
        ray start --address="${MASTER_IP}:${PORT}"
        if [ $? -eq 0 ]; then
            break
        fi
        echo "Failed to connect to master, retrying in 5 seconds..."
        sleep 5
    done
}

check_ray_status() {
    until ray status >/dev/null 2>&1; do
        echo "Waiting for Ray cluster to be ready..."
        sleep 5
    done
}

if [ "$RANK" == "0" ]; then
    echo "Starting HEAD node..."
    ray start --head --port=${PORT}

    check_ray_status
    echo "Ray head node started successfully"
else
    echo "Starting WORKER node..."
    ray_start_retry

    check_ray_status
    echo "Successfully joined Ray cluster"
fi

if [ "$RANK" == "0" ]; then
    bash ${PROJ_DIR}/scripts/rl_4nodes_dapo.sh 2>&1 | tee ${PROJ_DIR}/logs/rl_log_$(date +%Y%m%d_%H%M%S).txt &
else
    sleep 30d
fi

wait
```

244
+
245
+ ## 📊 Evaluation
246
+
247
+ We conduct evaluation on seven long-context DocQA benchmarks, including multi-hop reasoning benchmarks such as 2WikiMultihopQA, HotpotQA, Musique, NarrativeQA, Qasper, and Frames as well as mathematical reasoning benchmarks like DocMath. We report the maximum of exact match and LLM-judged accuracy as the final score, aligned with the reward function in our RL training process. We use DeepSeek-V3 as the judge model with a temperature of 0.0 to provide a reliable evaluation.
248
+
249
+ ```bash
250
+ # Step 1. Serve the model for evaluation
251
+ export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
252
+ MODEL_NAME="QwenLong-L1-32B"
253
+ MODEL_PATH="Tongyi-Zhiwen/QwenLong-L1-32B"
254
+
255
+ vllm serve ${MODEL_PATH} \
256
+ --port 23547 \
257
+ --api-key "token-abc123" \
258
+ --tensor-parallel-size 8 \
259
+ --gpu-memory-utilization 0.95 \
260
+ --max_model_len 131072 \
261
+ --trust-remote-code
262
+
263
+ # Step 2. Generate model responses for each dataset
264
+ export SERVE_HOST="<YOUR_SERVE_HOST_HERE>" # e.g., 127.0.0.1
265
+ export SERVE_PORT="23547"
266
+ PROJ_DIR="<YOUR_PROJ_DIR_HERE>"
267
+ DATA="<YOUR_DATA_HERE>" # e.g., docmath, frames, 2wikimqa, hotpotqa, musique, narrativeqa, pasper
268
+ python ${PROJ_DIR}/eval/${DATA}.py \
269
+ --save_dir "${PROJ_DIR}/eval/results/${DATA}" \
270
+ --save_file "${MODEL_NAME}" \
271
+ --model "${MODEL_PATH}" \
272
+ --tokenizer "${MODEL_PATH}" \
273
+ --n_proc 16 \
274
+ --api "openai"
275
+
276
+ # Step 3. Verify model responses for each dataset
277
+ export VERIFIER_API="<YOUR_API_KEY_HERE>"
278
+ export VERIFIER_URL="https://api.deepseek.com/v1"
279
+ PROJ_DIR="<YOUR_PROJ_DIR_HERE>"
280
+ DATA="<YOUR_DATA_HERE>" # e.g., docmath, frames, 2wikimqa, hotpotqa, musique, narrativeqa, pasper
281
+ python ${PROJ_DIR}/eval/${DATA}_verify.py \
282
+ --save_dir "${PROJ_DIR}/results/${DATA}" \
283
+ --save_file "${MODEL_NAME}" \
284
+ --judge_model "deepseek-chat" \
285
+ --batch_size 20
286
+ ```
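
As a reference for Step 3, the final score described above is the maximum of exact match and LLM-judged accuracy. A simplified sketch of the judging call is shown below; the prompt wording is our own illustration, and the released `eval/*_verify.py` scripts contain the actual logic:

```python
from openai import OpenAI

# DeepSeek-V3 is exposed as "deepseek-chat" behind the OpenAI-compatible DeepSeek API.
client = OpenAI(base_url="https://api.deepseek.com/v1", api_key="<YOUR_API_KEY_HERE>")


def llm_judged_correct(question: str, gold: str, prediction: str) -> bool:
    """Ask the judge model for a binary verdict at temperature 0.0."""
    judge_prompt = (
        "You are grading a question-answering system.\n"
        f"Question: {question}\nGold answer: {gold}\nPredicted answer: {prediction}\n"
        "Reply with exactly 'yes' if the prediction is correct, otherwise 'no'."
    )
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content": judge_prompt}],
        temperature=0.0,
    )
    return response.choices[0].message.content.strip().lower().startswith("yes")


# Per-example final score: the maximum of exact match and the LLM-judged outcome.
# final_score = max(exact_match, float(llm_judged_correct(question, gold, prediction)))
```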

## 📝 Citation

If you find this work relevant to your research or applications, please feel free to cite it!
```
@article{wan2025qwenlongl1,
  title={QwenLong-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning},
  author={Fanqi Wan and Weizhou Shen and Shengyi Liao and Yingcheng Shi and Chenliang Li and Ziyi Yang and Ji Zhang and Fei Huang and Jingren Zhou and Ming Yan},
  journal={arXiv preprint arXiv:xxxx.xxxxx},
  year={2025}
}
```