lbourdois committed
Commit 3ebf006 · verified · 1 Parent(s): 815b07c

Improve language tag


Hi! As the model is multilingual, this is a PR to add languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
  1. README.md +246 -234
README.md CHANGED
@@ -1,235 +1,247 @@
- ---
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M/blob/main/LICENSE
- language:
- - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-7B
- tags:
- - chat
- library_name: transformers
- ---
 
+ ---
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-7B
+ tags:
+ - chat
+ library_name: transformers
+ ---
+
+ # Qwen2.5-7B-Instruct-1M
+ <a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
+ <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
+ </a>
+
+ ## Introduction
+
+ Qwen2.5-1M is the long-context version of the Qwen2.5 series models, supporting a context length of up to 1M tokens. Compared to the Qwen2.5 128K version, Qwen2.5-1M demonstrates significantly improved performance on long-context tasks while maintaining its capability on short tasks.
+
+ The model has the following features:
+ - Type: Causal Language Models
+ - Training Stage: Pretraining & Post-training
+ - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
+ - Number of Parameters: 7.61B
+ - Number of Parameters (Non-Embedding): 6.53B
+ - Number of Layers: 28
+ - Number of Attention Heads (GQA): 28 for Q and 4 for KV
+ - Context Length: Full 1,010,000 tokens and generation 8,192 tokens
+ - We recommend deploying with our custom vLLM, which introduces sparse attention and length extrapolation methods to ensure efficiency and accuracy for long-context tasks. For specific guidance, refer to [this section](#processing-ultra-long-texts).
+ - You can also use a previous framework that supports Qwen2.5 for inference, but accuracy degradation may occur for sequences exceeding 262,144 tokens.
+
+ For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-1m/), [GitHub](https://github.com/QwenLM/Qwen2.5), [Technical Report](https://huggingface.co/papers/2501.15383), and [Documentation](https://qwen.readthedocs.io/en/latest/).
+
+ ## Requirements
+
+ The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
+
+ With `transformers<4.37.0`, you will encounter the following error:
+ ```
+ KeyError: 'qwen2'
+ ```
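+
+ To resolve it, you can upgrade `transformers`; a minimal sketch (the `>=4.37.0` pin reflects the version threshold noted above):
+
+ ```bash
+ # Upgrade transformers to a release that registers the 'qwen2' model type,
+ # then print the installed version to confirm.
+ pip install --upgrade "transformers>=4.37.0"
+ python -c "import transformers; print(transformers.__version__)"
+ ```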
+
+ ## Quickstart
+
+ Here is a code snippet showing how to load the tokenizer and model and how to generate content using `apply_chat_template`.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Qwen/Qwen2.5-7B-Instruct-1M"
+
+ # Load the model weights and tokenizer from the Hugging Face Hub.
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ prompt = "Give me a short introduction to large language models."
+ messages = [
+     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
+     {"role": "user", "content": prompt}
+ ]
+ # Render the chat messages into a single prompt string using the model's chat template.
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=512
+ )
+ # Strip the prompt tokens so only the newly generated tokens are decoded.
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
+
+ ### Processing Ultra Long Texts
+
+ To enhance processing accuracy and efficiency for long sequences, we have developed an advanced inference framework based on vLLM, incorporating sparse attention and length extrapolation. This approach significantly improves model generation performance for sequences exceeding 256K tokens and achieves a 3 to 7 times speedup for sequences up to 1M tokens.
+
+ Here we provide step-by-step instructions for deploying the Qwen2.5-1M models with our framework.
+
+ #### 1. System Preparation
+
+ To achieve the best performance, we recommend using GPUs with the Ampere or Hopper architecture, which support optimized kernels.
+
+ Ensure your system meets the following requirements:
+
+ - **CUDA Version**: 12.1 or 12.3
+ - **Python Version**: >=3.9 and <=3.12
+
+ **VRAM Requirements:**
+
+ - For processing 1-million-token sequences:
+   - **Qwen2.5-7B-Instruct-1M**: At least 120GB VRAM (total across GPUs).
+   - **Qwen2.5-14B-Instruct-1M**: At least 320GB VRAM (total across GPUs).
+
+ If your GPUs do not have sufficient VRAM, you can still use Qwen2.5-1M for shorter tasks.
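+
+ Before installing, you may want to sanity-check your environment against the requirements above; a minimal sketch using standard tools:
+
+ ```bash
+ # GPU model, available VRAM, and the CUDA version supported by the driver
+ nvidia-smi
+ # Installed CUDA toolkit version (should report 12.1 or 12.3)
+ nvcc --version
+ # Python interpreter version (should be >=3.9 and <=3.12)
+ python --version
+ ```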
+
+ #### 2. Install Dependencies
+
+ For now, you need to clone the vLLM repository from our custom branch and install it manually. We are working on getting our branch merged into the main vLLM project.
+
+ ```bash
+ git clone -b dev/dual-chunk-attn git@github.com:QwenLM/vllm.git
+ cd vllm
+ pip install -e . -v
+ ```
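+
+ To confirm that the editable install succeeded, a quick check (assuming the package exposes the usual `vllm.__version__` attribute):
+
+ ```bash
+ # Import vLLM and print its version; an ImportError means the build failed.
+ python -c "import vllm; print(vllm.__version__)"
+ ```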
+
+ #### 3. Launch vLLM
+
+ vLLM supports offline inference as well as launching an OpenAI-compatible server.
+
+ **Example of Offline Inference**
+
+ ```python
+ from transformers import AutoTokenizer
+ from vllm import LLM, SamplingParams
+
+ # Initialize the tokenizer
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct-1M")
+
+ # Pass the default decoding hyperparameters of Qwen2.5-7B-Instruct.
+ # max_tokens is the maximum length for generation.
+ sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512)
+
+ # Input the model name or path. See the parameter explanations below (after the OpenAI-compatible server example).
+ llm = LLM(model="Qwen/Qwen2.5-7B-Instruct-1M",
+     tensor_parallel_size=4,
+     max_model_len=1010000,
+     enable_chunked_prefill=True,
+     max_num_batched_tokens=131072,
+     enforce_eager=True,
+     # quantization="fp8",  # Enabling FP8 quantization for model weights can reduce memory usage.
+ )
+
+ # Prepare your prompts
+ prompt = "Tell me something about large language models."
+ messages = [
+     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ # Generate outputs
+ outputs = llm.generate([text], sampling_params)
+
+ # Print the outputs.
+ for output in outputs:
+     prompt = output.prompt
+     generated_text = output.outputs[0].text
+     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
+ ```
+
+ **Example of an OpenAI-Compatible Server**
+
+ ```bash
+ vllm serve Qwen/Qwen2.5-7B-Instruct-1M \
+   --tensor-parallel-size 4 \
+   --max-model-len 1010000 \
+   --enable-chunked-prefill --max-num-batched-tokens 131072 \
+   --enforce-eager \
+   --max-num-seqs 1
+
+ # --quantization fp8  # Enabling FP8 quantization for model weights can reduce memory usage.
+ ```
+
+ Then you can use curl or Python to interact with the deployed model.
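+
+ For example, a minimal curl request to the server started above (assuming the default address, `http://localhost:8000`):
+
+ ```bash
+ curl http://localhost:8000/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{
+     "model": "Qwen/Qwen2.5-7B-Instruct-1M",
+     "messages": [
+       {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
+       {"role": "user", "content": "Tell me something about large language models."}
+     ],
+     "temperature": 0.7,
+     "top_p": 0.8,
+     "max_tokens": 512
+   }'
+ ```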
+
+ **Parameter Explanations:**
+
+ - **`--tensor-parallel-size`**
+   - Set to the number of GPUs you are using: at most 4 GPUs for the 7B model and 8 GPUs for the 14B model.
+
+ - **`--max-model-len`**
+   - Defines the maximum input sequence length. Reduce this value if you encounter out-of-memory issues.
+
+ - **`--max-num-batched-tokens`**
+   - Sets the chunk size in chunked prefill. A smaller value reduces activation memory usage but may slow down inference.
+   - We recommend 131072 for optimal performance.
+
+ - **`--max-num-seqs`**
+   - Limits the number of sequences processed concurrently.
+
+ You can also refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for more details on using vLLM.
+
+ #### Troubleshooting
+
+ 1. Encountering the error: "The model's max sequence length (xxxxx) is larger than the maximum number of tokens that can be stored in the KV cache."
+
+     The VRAM reserved for the KV cache is insufficient. Consider reducing `max_model_len` or increasing `tensor_parallel_size`. Alternatively, you can reduce `max_num_batched_tokens`, although this may significantly slow down inference.
+
+ 2. Encountering the error: "torch.OutOfMemoryError: CUDA out of memory."
+
+     The VRAM reserved for activations is insufficient. You can try setting `gpu_memory_utilization` to 0.85 or lower, but be aware that this might reduce the VRAM available for the KV cache.
+
+ 3. Encountering the error: "Input prompt (xxxxx tokens) + lookahead slots (0) is too long and exceeds the capacity of the block manager."
+
+     The input is too long. Consider using a shorter sequence or increasing `max_model_len`.
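+
+ Combining the first two remedies, a sketch of a more memory-conservative launch (the values here are illustrative, not tuned recommendations):
+
+ ```bash
+ # A shorter max-model-len shrinks the KV cache, and a lower
+ # gpu-memory-utilization leaves more headroom for activations.
+ vllm serve Qwen/Qwen2.5-7B-Instruct-1M \
+   --tensor-parallel-size 4 \
+   --max-model-len 262144 \
+   --enable-chunked-prefill --max-num-batched-tokens 131072 \
+   --enforce-eager \
+   --gpu-memory-utilization 0.85 \
+   --max-num-seqs 1
+ ```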
+
+ ## Evaluation & Performance
+
+ Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-1m/) and our [technical report](https://arxiv.org/abs/2501.15383).
+
+ ## Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @misc{qwen2.5-1m,
+     title = {Qwen2.5-1M: Deploy Your Own Qwen with Context Length up to 1M Tokens},
+     url = {https://qwenlm.github.io/blog/qwen2.5-1m/},
+     author = {Qwen Team},
+     month = {January},
+     year = {2025}
+ }
+
+ @article{qwen2.5,
+     title={Qwen2.5-1M Technical Report},
+     author={An Yang and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoyan Huang and Jiandong Jiang and Jianhong Tu and Jianwei Zhang and Jingren Zhou and Junyang Lin and Kai Dang and Kexin Yang and Le Yu and Mei Li and Minmin Sun and Qin Zhu and Rui Men and Tao He and Weijia Xu and Wenbiao Yin and Wenyuan Yu and Xiafei Qiu and Xingzhang Ren and Xinlong Yang and Yong Li and Zhiying Xu and Zipeng Zhang},
+     journal={arXiv preprint arXiv:2501.15383},
+     year={2025}
+ }
+ ```