PIKA665 committed
Commit 2c903d6 · verified · 1 Parent(s): 01f97e3

upload files
LICENSE CHANGED
@@ -0,0 +1,34 @@
+ OPENPANGU MODEL LICENSE AGREEMENT VERSION 1.0
+
+ This OPENPANGU MODEL LICENSE AGREEMENT VERSION 1.0 (the "Agreement") is a legal agreement between You and Huawei Technologies Co., Ltd. ("Huawei", "We" or "Us"), and it governs Your reproduction, use, modification, and distribution of openPangu as made available by Huawei under this Agreement.
+
+ By using, reproducing, modifying, distributing, performing or displaying any portion or element of openPangu, or otherwise accepting the terms of this Agreement, You agree to be bound by this Agreement.
+
+ 1. Definitions.
+ 1.1. "openPangu" or "Model" means the openPangu large language models and software, including trained model weights, parameters (including optimizer states), and the accompanying source code and scripts released under this Agreement.
+ 1.2. "Derivative Model" means all (1) modifications to the Model, (2) works based on the Model, and (3) any other derivative works of the Model. For clarity, information or content resulting from operating or otherwise using the Model is not a Derivative Model.
+ 1.3. "You" or "Your" means an individual or Legal Entity exercising permissions granted by this Agreement and/or using the Model for any purpose.
+ 1.4. "Third Party" or "Third Parties" means individuals or legal entities that are not under common control with Us or You.
+
+ 2. License Grant. Subject to Your full compliance with the terms and conditions of this Agreement, We hereby grant to You a perpetual, worldwide, non-exclusive, non-transferable, no-charge, royalty-free license (except as stated in Section 3) to use, reproduce, modify, and distribute the Model.
+
+ 3. Conditions for License Grant. You represent and warrant that You will not access, download, install, run, deploy, integrate, modify, or otherwise use the Model, directly or indirectly, within the European Union.
+
+
+ 4. Redistribution.
+ 4.1. If You distribute the Model or a Derivative Model, You shall retain in Your distribution (1) a copy of this Agreement, and (2) all copyright notices and other notices of origin included in the Model that are applicable to Your distribution.
+ 4.2. Further, if You distribute or make available to Third Parties a product or service (including another AI model) based on the Model, You are required to (1) display the acknowledgement "Powered by openPangu" and (2) include the trademark notice "openPangu is a trademark of Huawei Technologies Co., Ltd." on related webpages, user manuals, product documentation or other advertising materials mentioning features of the Model.
+ 4.3. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for a Derivative Model made by You as a whole, provided Your use, reproduction, and distribution of the Model otherwise complies with the terms and conditions of this Agreement.
+
+ 5. Ownership. We do not claim ownership of any information or content generated using the Model or a Derivative Model made by You. You are solely responsible for evaluating the accuracy and appropriateness of such information or content for Your use case.
+
+ 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of Huawei, except as required for complying with Section 4.2.
+
+ 7. Indemnity. You will indemnify and hold harmless Huawei from and against any claim by any third party arising out of or related to Your use or distribution of the Model or a Derivative Model made by You (e.g. a violation of Section 3). For the avoidance of doubt, "third party" in this clause includes supervisory authorities.
+
+ 8. THE MODEL IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, NONINFRINGEMENT, ACCURACY, OR THE ABSENCE OF LATENT OR OTHER DEFECTS OR ERRORS, WHETHER OR NOT DISCOVERABLE, ALL TO THE GREATEST EXTENT PERMISSIBLE UNDER APPLICABLE LAW.
+
+ 9. IN NO EVENT SHALL WE BE LIABLE TO YOU FOR ANY DAMAGES, INCLUDING, BUT NOT LIMITED TO, ANY DIRECT, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING FROM YOUR USE OR INABILITY TO USE THE MODEL, IN WHOLE OR IN PART, NO MATTER HOW IT IS CAUSED OR THE LEGAL THEORY IT IS BASED ON, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+
+ END OF THE TERMS AND CONDITIONS
Open Source Software Notice ADDED
@@ -0,0 +1,218 @@
+ OPEN SOURCE SOFTWARE NOTICE
+
+ Please note that we provide an open source software notice along with this product and/or this product's firmware (hereinafter "this product"). The open source software licenses are granted by the respective right holders, and those licenses prevail over all other license information with regard to the respective open source software contained in the product, including but not limited to any End User Software Licensing Agreement. This notice is provided on behalf of Huawei Technologies Co., Ltd. and any of its local subsidiaries which may have provided this product to you in your local country.
+
+ Warranty Disclaimer
+ THE OPEN SOURCE SOFTWARE IN THIS PRODUCT IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY, WITHOUT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. SEE THE APPLICABLE LICENSES FOR MORE DETAILS.
+
+ Copyright Notice and License Texts
+
+ Software: transformers 4.53.2
+ Copyright notice:
+ Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
+
+ License Text:
+ ----------------------------------------
+
+                                  Apache License
+                            Version 2.0, January 2004
+                         http://www.apache.org/licenses/
+
+    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+    1. Definitions.
+
+       "License" shall mean the terms and conditions for use, reproduction,
+       and distribution as defined by Sections 1 through 9 of this document.
+
+       "Licensor" shall mean the copyright owner or entity authorized by
+       the copyright owner that is granting the License.
+
+       "Legal Entity" shall mean the union of the acting entity and all
+       other entities that control, are controlled by, or are under common
+       control with that entity. For the purposes of this definition,
+       "control" means (i) the power, direct or indirect, to cause the
+       direction or management of such entity, whether by contract or
+       otherwise, or (ii) ownership of fifty percent (50%) or more of the
+       outstanding shares, or (iii) beneficial ownership of such entity.
+
+       "You" (or "Your") shall mean an individual or Legal Entity
+       exercising permissions granted by this License.
+
+       "Source" form shall mean the preferred form for making modifications,
+       including but not limited to software source code, documentation
+       source, and configuration files.
+
+       "Object" form shall mean any form resulting from mechanical
+       transformation or translation of a Source form, including but
+       not limited to compiled object code, generated documentation,
+       and conversions to other media types.
+
+       "Work" shall mean the work of authorship, whether in Source or
+       Object form, made available under the License, as indicated by a
+       copyright notice that is included in or attached to the work
+       (an example is provided in the Appendix below).
+
+       "Derivative Works" shall mean any work, whether in Source or Object
+       form, that is based on (or derived from) the Work and for which the
+       editorial revisions, annotations, elaborations, or other modifications
+       represent, as a whole, an original work of authorship. For the purposes
+       of this License, Derivative Works shall not include works that remain
+       separable from, or merely link (or bind by name) to the interfaces of,
+       the Work and Derivative Works thereof.
+
+       "Contribution" shall mean any work of authorship, including
+       the original version of the Work and any modifications or additions
+       to that Work or Derivative Works thereof, that is intentionally
+       submitted to Licensor for inclusion in the Work by the copyright owner
+       or by an individual or Legal Entity authorized to submit on behalf of
+       the copyright owner. For the purposes of this definition, "submitted"
+       means any form of electronic, verbal, or written communication sent
+       to the Licensor or its representatives, including but not limited to
+       communication on electronic mailing lists, source code control systems,
+       and issue tracking systems that are managed by, or on behalf of, the
+       Licensor for the purpose of discussing and improving the Work, but
+       excluding communication that is conspicuously marked or otherwise
+       designated in writing by the copyright owner as "Not a Contribution."
+
+       "Contributor" shall mean Licensor and any individual or Legal Entity
+       on behalf of whom a Contribution has been received by Licensor and
+       subsequently incorporated within the Work.
+
+    2. Grant of Copyright License. Subject to the terms and conditions of
+       this License, each Contributor hereby grants to You a perpetual,
+       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+       copyright license to reproduce, prepare Derivative Works of,
+       publicly display, publicly perform, sublicense, and distribute the
+       Work and such Derivative Works in Source or Object form.
+
+    3. Grant of Patent License. Subject to the terms and conditions of
+       this License, each Contributor hereby grants to You a perpetual,
+       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+       (except as stated in this section) patent license to make, have made,
+       use, offer to sell, sell, import, and otherwise transfer the Work,
+       where such license applies only to those patent claims licensable
+       by such Contributor that are necessarily infringed by their
+       Contribution(s) alone or by combination of their Contribution(s)
+       with the Work to which such Contribution(s) was submitted. If You
+       institute patent litigation against any entity (including a
+       cross-claim or counterclaim in a lawsuit) alleging that the Work
+       or a Contribution incorporated within the Work constitutes direct
+       or contributory patent infringement, then any patent licenses
+       granted to You under this License for that Work shall terminate
+       as of the date such litigation is filed.
+
+    4. Redistribution. You may reproduce and distribute copies of the
+       Work or Derivative Works thereof in any medium, with or without
+       modifications, and in Source or Object form, provided that You
+       meet the following conditions:
+
+       (a) You must give any other recipients of the Work or
+           Derivative Works a copy of this License; and
+
+       (b) You must cause any modified files to carry prominent notices
+           stating that You changed the files; and
+
+       (c) You must retain, in the Source form of any Derivative Works
+           that You distribute, all copyright, patent, trademark, and
+           attribution notices from the Source form of the Work,
+           excluding those notices that do not pertain to any part of
+           the Derivative Works; and
+
+       (d) If the Work includes a "NOTICE" text file as part of its
+           distribution, then any Derivative Works that You distribute must
+           include a readable copy of the attribution notices contained
+           within such NOTICE file, excluding those notices that do not
+           pertain to any part of the Derivative Works, in at least one
+           of the following places: within a NOTICE text file distributed
+           as part of the Derivative Works; within the Source form or
+           documentation, if provided along with the Derivative Works; or,
+           within a display generated by the Derivative Works, if and
+           wherever such third-party notices normally appear. The contents
+           of the NOTICE file are for informational purposes only and
+           do not modify the License. You may add Your own attribution
+           notices within Derivative Works that You distribute, alongside
+           or as an addendum to the NOTICE text from the Work, provided
+           that such additional attribution notices cannot be construed
+           as modifying the License.
+
+       You may add Your own copyright statement to Your modifications and
+       may provide additional or different license terms and conditions
+       for use, reproduction, or distribution of Your modifications, or
+       for any such Derivative Works as a whole, provided Your use,
+       reproduction, and distribution of the Work otherwise complies with
+       the conditions stated in this License.
+
+    5. Submission of Contributions. Unless You explicitly state otherwise,
+       any Contribution intentionally submitted for inclusion in the Work
+       by You to the Licensor shall be under the terms and conditions of
+       this License, without any additional terms or conditions.
+       Notwithstanding the above, nothing herein shall supersede or modify
+       the terms of any separate license agreement you may have executed
+       with Licensor regarding such Contributions.
+
+    6. Trademarks. This License does not grant permission to use the trade
+       names, trademarks, service marks, or product names of the Licensor,
+       except as required for reasonable and customary use in describing the
+       origin of the Work and reproducing the content of the NOTICE file.
+
+    7. Disclaimer of Warranty. Unless required by applicable law or
+       agreed to in writing, Licensor provides the Work (and each
+       Contributor provides its Contributions) on an "AS IS" BASIS,
+       WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+       implied, including, without limitation, any warranties or conditions
+       of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+       PARTICULAR PURPOSE. You are solely responsible for determining the
+       appropriateness of using or redistributing the Work and assume any
+       risks associated with Your exercise of permissions under this License.
+
+    8. Limitation of Liability. In no event and under no legal theory,
+       whether in tort (including negligence), contract, or otherwise,
+       unless required by applicable law (such as deliberate and grossly
+       negligent acts) or agreed to in writing, shall any Contributor be
+       liable to You for damages, including any direct, indirect, special,
+       incidental, or consequential damages of any character arising as a
+       result of this License or out of the use or inability to use the
+       Work (including but not limited to damages for loss of goodwill,
+       work stoppage, computer failure or malfunction, or any and all
+       other commercial damages or losses), even if such Contributor
+       has been advised of the possibility of such damages.
+
+    9. Accepting Warranty or Additional Liability. While redistributing
+       the Work or Derivative Works thereof, You may choose to offer,
+       and charge a fee for, acceptance of support, warranty, indemnity,
+       or other liability obligations and/or rights consistent with this
+       License. However, in accepting such obligations, You may act only
+       on Your own behalf and on Your sole responsibility, not on behalf
+       of any other Contributor, and only if You agree to indemnify,
+       defend, and hold each Contributor harmless for any liability
+       incurred by, or claims asserted against, such Contributor by reason
+       of your accepting any such warranty or additional liability.
+
+    END OF TERMS AND CONDITIONS
+
+    APPENDIX: How to apply the Apache License to your work.
+
+       To apply the Apache License to your work, attach the following
+       boilerplate notice, with the fields enclosed by brackets "[]"
+       replaced with your own identifying information. (Don't include
+       the brackets!) The text should be enclosed in the appropriate
+       comment syntax for the file format. We also recommend that a
+       file or class name and description of purpose be included on the
+       same "printed page" as the copyright notice for easier
+       identification within third-party archives.
+
+    Copyright [yyyy] [name of copyright owner]
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+
README.md CHANGED
@@ -1,10 +1,3 @@
- ---
- license: other
- license_name: openpangu
- license_link: LICENSE
- ---
- GPU version of https://ai.gitcode.com/ascend-tribe/openpangu-embedded-1b-model/tree/main
-
  # 开源盘古 Embedded-1B
  中文 | [English](README_EN.md)
 
README_EN.md ADDED
@@ -0,0 +1,106 @@
+ # **openPangu-Embedded-1B**
+
+ [中文](README.md) | English
+
+ ## 1. Introduction
+
+ openPangu-Embedded-1B is an efficient language model trained from scratch on Ascend NPUs, with 1B parameters (excluding vocabulary embeddings). The model uses a 26-layer dense architecture and was trained on approximately 10T tokens. Through an architecture designed for the Ascend Atlas 200I A2 together with optimized data and training strategies, openPangu-Embedded-1B achieves high accuracy while meeting the requirements of edge-side deployment.
+
+ ## 2. Model Architecture
+
+ openPangu-Embedded-1B is a highly efficient, fast-thinking language model designed to run on edge devices, with support for the Ascend Atlas 200I A2.
+
+ | | openPangu-Embedded-1B |
+ | :---------------------------: | :----------------: |
+ | **Architecture** | Dense |
+ | **Parameters (Non-Embedding)** | 1B |
+ | **Number of Layers** | 26 |
+ | **Hidden Dimension** | 1536 |
+ | **Attention Mechanism** | GQA |
+ | **Number of Attention Heads** | 12 for Q, 6 for KV |
+ | **Vocabulary Size** | 153k |
+ | **Context Length (Native)** | 32k |
+ | **Training Tokens** | 10T |
+
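+ As a quick sanity check, the dimensions above are consistent with the stated ~1B non-embedding parameter count. The following back-of-the-envelope estimate is an editor's sketch (values taken from `config.json` in this commit; biases and norm weights are ignored), not an official figure:
+
+ ```python
+ # Rough non-embedding parameter count for openPangu-Embedded-1B.
+ hidden, layers, inter, vocab = 1536, 26, 6144, 153376  # from config.json
+ q_heads, kv_heads = 12, 6
+ head_dim = hidden // q_heads  # 128
+
+ attn = (hidden * q_heads * head_dim          # q_proj
+         + 2 * hidden * kv_heads * head_dim   # k_proj and v_proj (GQA uses fewer KV heads)
+         + q_heads * head_dim * hidden)       # o_proj
+ mlp = 3 * hidden * inter                     # gate_proj, up_proj, down_proj
+ non_embedding = layers * (attn + mlp)
+
+ print(f"~{non_embedding / 1e9:.2f}B non-embedding parameters")              # ~0.92B
+ print(f"~{vocab * hidden / 1e6:.0f}M embedding parameters (tied lm_head)")  # ~236M
+ ```
+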
+ ## 3. Results
+
+ | Benchmark | Metric | Non-thinking |
+ |:------------------:|:----------:|:--------:|
+ | **General** | | |
+ | MMLU | Acc | 60.72 |
+ | CMMLU | Acc | 51.99 |
+ | C-Eval | Acc | 60.98 |
+ | IFEval | Prompt Strict | 56.56 |
+ | CLUEWSC | Acc | 68.55 |
+ | **Math & Reasoning** | | |
+ | GSM8K | Acc | 66.72 |
+ | MATH-500 | Acc | 52.00 |
+ | DROP | F1 | 50.31 |
+ | **Coding** | | |
+ | MBPP | Pass@1 | 54.09 |
+ | HumanEval | Pass@1 | 56.71 |
+
+ **Note:** The system prompt is left empty.
+
+
+ ## 4. Deployment
+
+ ### 4.1 Environment
+
+ #### Hardware Requirements
+
+ Atlas 800T A2 (64GB). Please refer to [Atlas 800T A2](https://www.hiascend.com/hardware/firmware-drivers/community?product=4&model=26&cann=8.2.RC1.alpha003&driver=Ascend+HDK+25.0.RC1) to obtain the driver and firmware installation packages.
+
+ #### System Requirements & Dependencies
+
+ - System: Linux (openEuler ≥ 24.03 recommended)
+ - CANN==8.1.RC1: [CANN Install](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/82RC1alpha002/softwareinst/instg/instg_0001.html?Mode=PmIns&OS=Ubuntu&Software=cannToolKit)
+ - python==3.10
+ - torch==2.1.0
+ - torch-npu==2.1.0.post12
+ - transformers==4.53.2
+
+ The software environment above has been verified; newer versions should in principle work as well. For any questions, please submit an issue.
+
+ ### 4.2 Integrity Check
+
+ Use the following method to verify the integrity of the downloaded content. The expected hash values are stored in the `checklist.chk` file.
+
+ ```bash
+ #!/usr/bin/env bash
+ ARCH=$(uname -m)
+ MODEL_PATH="${TARGET_FOLDER}/${MODEL_FOLDER_PATH}"
+ cd "$MODEL_PATH" || exit 1
+ if [ "$ARCH" = "arm64" ]; then
+     # macOS on Apple silicon usually ships shasum rather than sha256sum
+     shasum -a 256 -c checklist.chk
+ else
+     sha256sum -c checklist.chk
+ fi
+ ```
+
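+ If `sha256sum` or `shasum` is unavailable, the same check can be performed in Python. This is an editor's sketch, assuming it is run from the model directory and that each line of `checklist.chk` has the form `<sha256> *<path>`:
+
+ ```python
+ # Cross-platform equivalent of `sha256sum -c checklist.chk`.
+ import hashlib
+
+ def sha256_of(path, bufsize=1 << 20):
+     h = hashlib.sha256()
+     with open(path, "rb") as f:
+         while chunk := f.read(bufsize):  # read in 1 MiB chunks; model.safetensors is ~2.8 GB
+             h.update(chunk)
+     return h.hexdigest()
+
+ with open("checklist.chk") as f:
+     for line in f:
+         expected, _, name = line.strip().partition(" *")
+         print(f"{name}: {'OK' if sha256_of(name) == expected else 'FAILED'}")
+ ```
+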
+ ### 4.3 Inference Examples
+ The following is a simple inference example for openPangu-Embedded-1B based on the `transformers` framework:
+ > Please set the model path in `generate.py` before running.
+ ```bash
+ cd inference
+ python generate.py
+ ```
+ Inference with openPangu-Embedded-1B has been adapted for Ascend MindIE version 2.2.T10 (to be released soon) and can be deployed on the OrangePi AIpro (Ascend Atlas 200I A2). The adapted package will be available for download from [Ascend Community ModelZoo](https://gitee.com/ascend/ModelZoo-PyTorch/blob/master/MindIE/LLM/Pangu/openPangu-Embedded-1B-OrangePi/README.md). You need to apply for permission before downloading the image; once the application is approved, follow the guide to download the image file and installation instructions and complete the inference deployment.
+
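+ Note that the shipped `generation_config.json` disables sampling (`do_sample: false`), so decoding is greedy by default. If sampled outputs are preferred, the standard `transformers` arguments can be passed to `generate`; a minimal sketch (reusing `model`, `tokenizer`, and `model_inputs` from `generate.py`, with `top_p` mirroring `generation_config.json`):
+
+ ```python
+ # Sampling instead of the default greedy decoding.
+ outputs = model.generate(
+     **model_inputs,
+     do_sample=True,       # enable sampling
+     top_p=0.8,            # nucleus threshold from generation_config.json
+     temperature=1.0,
+     max_new_tokens=1024,
+     eos_token_id=45892,   # this model's end-of-sequence token id
+ )
+ input_length = model_inputs.input_ids.shape[1]
+ print(tokenizer.decode(outputs[0][input_length:], skip_special_tokens=True))
+ ```
+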
+ ## 5. Model License
+
+ Unless otherwise noted, the openPangu-Embedded-1B model is licensed under the terms and conditions of the OPENPANGU MODEL LICENSE AGREEMENT VERSION 1.0, which is intended to be permissive and to enable the further development of artificial intelligence technologies. Please refer to the [LICENSE](LICENSE) file in the root directory of the model repository for details.
+
+ ## 6. Disclaimer
+
+ Due to technical limitations inherent in the technology on which openPangu-Embedded-1B (the "Model") relies, and because AI-generated content is produced automatically by the Model, Huawei cannot make any guarantees regarding the following:
+ - The output of this Model is generated automatically by AI algorithms; it cannot be ruled out that some of the information may be flawed, unreasonable, or cause discomfort, and the generated content does not represent Huawei's attitude or standpoint;
+ - There is no guarantee that this Model is 100% accurate, reliable, functional, timely, secure, safe, error-free, uninterrupted, continuously stable, or free of any faults;
+ - The output of this Model does not constitute advice or a decision for you, and there is no guarantee of the authenticity, completeness, accuracy, timeliness, legality, functionality, or practicality of the generated content. The generated content cannot replace professionals in medicine, law, or other fields in answering your questions. It is for your reference only and does not represent any attitude, standpoint, or position of Huawei. You need to make independent judgments based on your actual situation, and Huawei does not assume any responsibility.
+
+
+ ## 7. Contact Us
+ If you have any comments or suggestions, please submit an issue or contact [email protected].
checklist.chk ADDED
@@ -0,0 +1,11 @@
+ 5d0c201df44b8bf3e7f7db5485177ea89327f1b591dedccc79858bde12ebef16 *./config.json
+ 7694a0e7b59d7ec2eeebc2fd058f02fe4dc4464b27f82839fc9f425a88555a3a *./configuration_openpangu_dense.py
+ a12bff27a61421a0dddff6d814d6a512d423d466f7fdec406460e45eaca2e7ce *./generation_config.json
+ 58f15aa7474fcb08d59156d6ecf28df23f187cc84a912a66b2f1d06053dcc988 *./inference/generate.py
+ 10b12467031fcfbce46f280245aa7e24959b912bfe8bbd4f6a44168d012b565e *./model.safetensors
+ f15eaf322af8a0b0f16b26795eb68af836179413d3dbfa4dc44505db6c8b0d6f *./modeling_openpangu_dense.py
+ c1f2d87f855b994039c52b1e83c8a7f3d71a2d1eb52946c4a2e862e99f19d8b3 *./modular_openpangu_dense.py
+ b34cf5e7c7660889303b6e2d0a346c440356385c9db551d06f6615cf9fc600d1 *./special_tokens_map.json
+ c98602d6d1f61792a8bd3393972bbbe7409a205c0bb6299394c74287c26bd723 *./tokenization_openpangu.py
+ 6b16f1558c0cd4ae6ef1a2c605713be0a514f50e1ce2d2c878979ce988c148ec *./tokenizer.model
+ acb88eac57f8765fedf34e9c10bc16d55c46f0902b0fea74fbf041daca2667ae *./tokenizer_config.json
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "architectures": [
+     "PanguEmbeddedForCausalLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_openpangu_dense.PanguEmbeddedConfig",
+     "AutoModel": "modeling_openpangu_dense.PanguEmbeddedModel",
+     "AutoModelForCausalLM": "modeling_openpangu_dense.PanguEmbeddedForCausalLM"
+   },
+   "bias": true,
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "pad_token_id": 0,
+   "eos_token_id": 45892,
+   "hidden_act": "silu",
+   "hidden_size": 1536,
+   "initializer_range": 0.02,
+   "intermediate_size": 6144,
+   "max_position_embeddings": 32768,
+   "model_type": "PanguEmbedded",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 26,
+   "num_key_value_heads": 6,
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 4000000.0,
+   "tie_word_embeddings": true,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.53.2",
+   "use_cache": true,
+   "vocab_size": 153376
+ }
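Editor's note: a quick, non-authoritative way to confirm that this configuration loads and is self-consistent (the path below is a placeholder, as in `generate.py`):

```python
# Load the shipped config and derive a few quantities from it.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("path_to_openPangu-Embedded-1B", trust_remote_code=True)
assert cfg.hidden_size % cfg.num_attention_heads == 0
print(cfg.model_type)                                      # PanguEmbedded
print(cfg.hidden_size // cfg.num_attention_heads)          # head_dim: 1536 // 12 = 128
print(cfg.num_attention_heads // cfg.num_key_value_heads)  # GQA group size: 2
```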
configuration_openpangu_dense.py ADDED
@@ -0,0 +1,56 @@
+ # coding=utf-8
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
+
+ from transformers.utils import logging
+ from transformers.configuration_utils import PretrainedConfig
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class PanguEmbeddedConfig(PretrainedConfig):
+     # Note: these constructor defaults describe a larger variant; the shipped
+     # 1B checkpoint overrides them via config.json.
+
+     model_type = "PanguEmbedded"
+     _auto_class = "AutoConfig"
+
+     def __init__(
+         self,
+         vocab_size=153376,
+         hidden_size=4096,
+         intermediate_size=12800,
+         num_hidden_layers=34,
+         num_attention_heads=32,
+         num_key_value_heads=8,
+         hidden_act="silu",
+         max_position_embeddings=32768,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         pad_token_id=0,
+         bos_token_id=1,
+         eos_token_id=45892,
+         tie_word_embeddings=False,
+         rope_theta=16000000.0,
+         bias=True,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.bias = bias
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
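Editor's note: the constructor defaults above describe a larger variant; the shipped 1B checkpoint overrides them through `config.json`. A hedged sketch of constructing the 1B configuration directly from those values:

```python
from configuration_openpangu_dense import PanguEmbeddedConfig

# These arguments mirror config.json in this commit; the defaults alone would not.
config_1b = PanguEmbeddedConfig(
    hidden_size=1536,
    intermediate_size=6144,
    num_hidden_layers=26,
    num_attention_heads=12,
    num_key_value_heads=6,
    rope_theta=4000000.0,
    tie_word_embeddings=True,
)
print(config_1b.hidden_size, config_1b.num_hidden_layers)  # 1536 26
```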
generate.py ADDED
@@ -0,0 +1,50 @@
+ # coding=utf-8
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_local_path = "path_to_openPangu-Embedded-1B"
+
+ # load the tokenizer and the model
+ tokenizer = AutoTokenizer.from_pretrained(
+     model_local_path,
+     use_fast=False,
+     trust_remote_code=True,
+     local_files_only=True
+ )
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_local_path,
+     trust_remote_code=True,
+     torch_dtype="auto",
+     device_map="auto",
+     local_files_only=True
+ )
+
+ # prepare the model input
+ # (The Chinese system prompt below instructs the model to obey laws and social
+ # ethics, avoid violent, pornographic, terrorist, or discriminatory content,
+ # and refuse with a warning when the input or output tends that way.)
+ sys_prompt = "你必须严格遵守法律法规和社会道德规范。" \
+     "生成任何内容时,都应避免涉及暴力、色情、恐怖主义、种族歧视、性别歧视等不当内容。" \
+     "一旦检测到输入或输出有此类倾向,应拒绝回答并发出警告。例如,如果输入内容包含暴力威胁或色情描述," \
+     "应返回错误信息:“您的输入包含不当内容,无法处理。”"
+
+ prompt = "Give me a short introduction to large language models."
+ messages = [
+     {"role": "system", "content": sys_prompt},  # define your system prompt here
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ # conduct text completion
+ outputs = model.generate(**model_inputs, max_new_tokens=32768, eos_token_id=45892, return_dict_in_generate=True)
+
+ input_length = model_inputs.input_ids.shape[1]
+ generated_tokens = outputs.sequences[:, input_length:]
+ content = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
+
+ print("\ncontent:", content)
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "_from_model_config": true,
+   "do_sample": false,
+   "bos_token_id": 1,
+   "pad_token_id": 0,
+   "eos_token_id": 45892,
+   "temperature": 1.0,
+   "top_k": 0,
+   "top_p": 0.8,
+   "transformers_version": "4.53.2"
+ }
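Editor's note: with `do_sample` set to `false`, decoding is greedy and the `temperature`, `top_k`, and `top_p` values have no effect unless sampling is enabled at call time. A small sketch for inspecting the shipped settings (placeholder path):

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("path_to_openPangu-Embedded-1B")
print(gen_cfg.do_sample, gen_cfg.top_p, gen_cfg.eos_token_id)  # False 0.8 45892
```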
gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
inference/generate.py ADDED
@@ -0,0 +1,50 @@
+ # coding=utf-8
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_local_path = "path_to_openPangu-Embedded-1B"
+
+ # load the tokenizer and the model
+ tokenizer = AutoTokenizer.from_pretrained(
+     model_local_path,
+     use_fast=False,
+     trust_remote_code=True,
+     local_files_only=True
+ )
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_local_path,
+     trust_remote_code=True,
+     torch_dtype="auto",
+     device_map="auto",
+     local_files_only=True
+ )
+
+ # prepare the model input
+ # (The Chinese system prompt below instructs the model to obey laws and social
+ # ethics, avoid violent, pornographic, terrorist, or discriminatory content,
+ # and refuse with a warning when the input or output tends that way.)
+ sys_prompt = "你必须严格遵守法律法规和社会道德规范。" \
+     "生成任何内容时,都应避免涉及暴力、色情、恐怖主义、种族歧视、性别歧视等不当内容。" \
+     "一旦检测到输入或输出有此类倾向,应拒绝回答并发出警告。例如,如果输入内容包含暴力威胁或色情描述," \
+     "应返回错误信息:“您的输入包含不当内容,无法处理。”"
+
+ prompt = "Give me a short introduction to large language models."
+ messages = [
+     {"role": "system", "content": sys_prompt},  # define your system prompt here
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ # conduct text completion
+ outputs = model.generate(**model_inputs, max_new_tokens=32768, eos_token_id=45892, return_dict_in_generate=True)
+
+ input_length = model_inputs.input_ids.shape[1]
+ generated_tokens = outputs.sequences[:, input_length:]
+ content = tokenizer.decode(generated_tokens[0], skip_special_tokens=True)
+
+ print("\ncontent:", content)
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10b12467031fcfbce46f280245aa7e24959b912bfe8bbd4f6a44168d012b565e
+ size 2783034328
modeling_openpangu_dense.py ADDED
@@ -0,0 +1,585 @@
1
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
2
+ # This file was automatically generated from modular_openpangu_dense.py.
3
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
4
+ # the file from the modular. If any change should be done, please apply the change to the
5
+ # modular_openpangu_dense.py file directly. One of our CI enforces this.
6
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
7
+
8
+ # coding=utf-8
9
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
10
+ # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
11
+ #
12
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
13
+ # and OPT implementations in this library. It has been modified from its
14
+ # original forms to accommodate minor architectural differences compared
15
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
16
+ #
17
+ # Licensed under the Apache License, Version 2.0 (the "License");
18
+ # you may not use this file except in compliance with the License.
19
+ # You may obtain a copy of the License at
20
+ #
21
+ # http://www.apache.org/licenses/LICENSE-2.0
22
+ #
23
+ # Unless required by applicable law or agreed to in writing, software
24
+ # distributed under the License is distributed on an "AS IS" BASIS,
25
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
26
+ # See the License for the specific language governing permissions and
27
+ # limitations under the License.
28
+
29
+ from typing import Callable, Optional, Union
30
+
31
+ import torch
32
+ from torch import nn
33
+
34
+ import torch_npu
35
+ from torch_npu.contrib import transfer_to_npu
36
+ if "910" in torch.npu.get_device_name():
37
+ NPU_ATTN_INFR = True
38
+ print("[INFO] torch_npu detected. Using NPU fused infer attention.")
39
+ else:
40
+ NPU_ATTN_INFR = False
41
+
42
+ from transformers.activations import ACT2FN
43
+ from transformers.cache_utils import Cache, DynamicCache
44
+ from transformers.generation import GenerationMixin
45
+ from transformers.masking_utils import create_causal_mask
46
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
47
+ from transformers.modeling_layers import GradientCheckpointingLayer
48
+ from transformers.modeling_outputs import (
49
+ BaseModelOutputWithPast,
50
+ CausalLMOutputWithPast,
51
+ SequenceClassifierOutputWithPast,
52
+ )
53
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
54
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
55
+ from transformers.processing_utils import Unpack
56
+ from transformers.utils import LossKwargs, auto_docstring, can_return_tuple, logging
57
+ from .configuration_openpangu_dense import PanguEmbeddedConfig
58
+
59
+
60
+ logger = logging.get_logger(__name__)
61
+
62
+
63
+ class PanguEmbeddedRMSNorm(nn.Module):
64
+ def __init__(self, hidden_size, eps=1e-6):
65
+ """
66
+ PanguEmbeddedRMSNorm is equivalent to T5LayerNorm
67
+ """
68
+ super().__init__()
69
+ self.weight = nn.Parameter(torch.ones(hidden_size))
70
+ self.variance_epsilon = eps
71
+
72
+ def forward(self, hidden_states):
73
+ input_dtype = hidden_states.dtype
74
+ hidden_states = hidden_states.to(torch.float32)
75
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
76
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
77
+ return self.weight * hidden_states.to(input_dtype)
78
+
79
+ def extra_repr(self):
80
+ return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
81
+
82
+
83
+ class PanguEmbeddedRotaryEmbedding(nn.Module):
84
+ def __init__(self, config: PanguEmbeddedConfig, device=None):
85
+ super().__init__()
86
+ # BC: "rope_type" was originally "type"
87
+ if hasattr(config, "rope_scaling") and config.rope_scaling is not None:
88
+ self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
89
+ else:
90
+ self.rope_type = "default"
91
+ self.max_seq_len_cached = config.max_position_embeddings
92
+ self.original_max_seq_len = config.max_position_embeddings
93
+
94
+ self.config = config
95
+ self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
96
+
97
+ inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
98
+ self.register_buffer("inv_freq", inv_freq, persistent=False)
99
+ self.original_inv_freq = self.inv_freq
100
+
101
+ @torch.no_grad()
102
+ @dynamic_rope_update # power user: used with advanced RoPE types (e.g. dynamic rope)
103
+ def forward(self, x, position_ids):
104
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
105
+ position_ids_expanded = position_ids[:, None, :].float()
106
+
107
+ device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
108
+ with torch.autocast(device_type=device_type, enabled=False): # Force float32
109
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
110
+ emb = torch.cat((freqs, freqs), dim=-1)
111
+ cos = emb.cos() * self.attention_scaling
112
+ sin = emb.sin() * self.attention_scaling
113
+
114
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
115
+
116
+
117
+ def rotate_half(x):
118
+ """Rotates half the hidden dims of the input."""
119
+ x1 = x[..., : x.shape[-1] // 2]
120
+ x2 = x[..., x.shape[-1] // 2 :]
121
+ return torch.cat((-x2, x1), dim=-1)
122
+
123
+
124
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
125
+ """Applies Rotary Position Embedding to the query and key tensors.
126
+
127
+ Args:
128
+ q (`torch.Tensor`): The query tensor.
129
+ k (`torch.Tensor`): The key tensor.
130
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
131
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
132
+ position_ids (`torch.Tensor`, *optional*):
133
+ Deprecated and unused.
134
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
135
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
136
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
137
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
138
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
139
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
140
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
141
+ Returns:
142
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
143
+ """
144
+ cos = cos.unsqueeze(unsqueeze_dim)
145
+ sin = sin.unsqueeze(unsqueeze_dim)
146
+ q_embed = (q * cos) + (rotate_half(q) * sin)
147
+ k_embed = (k * cos) + (rotate_half(k) * sin)
148
+ return q_embed, k_embed
149
+
150
+
151
+ class PanguEmbeddedMLP(nn.Module):
152
+ def __init__(self, config):
153
+ super().__init__()
154
+ self.config = config
155
+ self.hidden_size = config.hidden_size
156
+ self.intermediate_size = config.intermediate_size
157
+ self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
158
+ self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
159
+ self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
160
+ self.act_fn = ACT2FN[config.hidden_act]
161
+
162
+ def forward(self, x):
163
+ down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
164
+ return down_proj
165
+
166
+
167
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
168
+ """
169
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
170
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
171
+ """
172
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
173
+ if n_rep == 1:
174
+ return hidden_states
175
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
176
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
177
+
178
+
179
+ def eager_attention_forward(
180
+ module: nn.Module,
181
+ query: torch.Tensor,
182
+ key: torch.Tensor,
183
+ value: torch.Tensor,
184
+ attention_mask: Optional[torch.Tensor],
185
+ scaling: float,
186
+ dropout: float = 0.0,
187
+ **kwargs,
188
+ ):
189
+ key_states = repeat_kv(key, module.num_key_value_groups)
190
+ value_states = repeat_kv(value, module.num_key_value_groups)
191
+
192
+ attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
193
+ if attention_mask is not None:
194
+ causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
195
+ attn_weights = attn_weights + causal_mask
196
+
197
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
198
+ attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
199
+ attn_output = torch.matmul(attn_weights, value_states)
200
+ attn_output = attn_output.transpose(1, 2).contiguous()
201
+
202
+ return attn_output, attn_weights
203
+
204
+
205
+ class PanguEmbeddedAttention(nn.Module):
206
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
207
+
208
+ def __init__(self, config: PanguEmbeddedConfig, layer_idx: int):
209
+ super().__init__()
210
+ self.config = config
211
+ self.layer_idx = layer_idx
212
+ self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
213
+ self.num_heads = config.num_attention_heads
214
+ self.num_key_value_heads = config.num_key_value_heads
215
+ self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
216
+ self.scaling = self.head_dim**-0.5
217
+ self.attention_dropout = config.attention_dropout
218
+ self.is_causal = True
219
+
220
+ self.q_proj = nn.Linear(config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.bias)
221
+ self.k_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.bias)
222
+ self.v_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.bias)
223
+ self.o_proj = nn.Linear(config.num_attention_heads * self.head_dim, config.hidden_size, bias=config.bias)
224
+
225
+ def forward(
226
+ self,
227
+ hidden_states: torch.Tensor,
228
+ position_embeddings: tuple[torch.Tensor, torch.Tensor],
229
+ attention_mask: Optional[torch.Tensor],
230
+ past_key_value: Optional[Cache] = None,
231
+ cache_position: Optional[torch.LongTensor] = None,
232
+ **kwargs: Unpack[FlashAttentionKwargs],
233
+ ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]:
234
+ input_shape = hidden_states.shape[:-1]
235
+ hidden_shape = (*input_shape, -1, self.head_dim)
236
+
237
+ query_states = self.q_proj(hidden_states).view(hidden_shape).transpose(1, 2)
238
+ key_states = self.k_proj(hidden_states).view(hidden_shape).transpose(1, 2)
239
+ value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
240
+
241
+ cos, sin = position_embeddings
242
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
243
+
244
+ if past_key_value is not None:
245
+ # sin and cos are specific to RoPE models; cache_position needed for the static cache
246
+ cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
247
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
248
+
249
+ attention_interface: Callable = eager_attention_forward
250
+ if self.config._attn_implementation != "eager":
251
+ attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
252
+
253
+ if not self.training and NPU_ATTN_INFR:
254
+ q_len = input_shape[1]
255
+ if attention_mask is not None:
256
+ attention_mask = ~attention_mask.bool()
257
+ elif q_len > 1:
258
+ attention_mask = torch.triu(torch.ones([q_len, q_len]), diagonal=1).bool().unsqueeze(0).unsqueeze(0).to(query_states.device)
259
+
260
+ attn_output, _ = torch_npu.npu_fused_infer_attention_score(
261
+ query_states, key_states, value_states,
262
+ num_heads=self.num_heads, num_key_value_heads=self.num_key_value_heads,
263
+ input_layout="BNSD", atten_mask=attention_mask, scale=self.scaling)
264
+ attn_output = attn_output.transpose(1, 2)
265
+ attn_weights = None
266
+ else:
267
+ attn_output, attn_weights = attention_interface(
268
+ self,
269
+ query_states,
270
+ key_states,
271
+ value_states,
272
+ attention_mask,
273
+ dropout=0.0 if not self.training else self.attention_dropout,
274
+ scaling=self.scaling,
275
+ **kwargs,
276
+ )
277
+
278
+ attn_output = attn_output.reshape(*input_shape, -1).contiguous()
279
+ attn_output = self.o_proj(attn_output)
280
+ return attn_output, attn_weights
281
+
282
+
283
+ class PanguEmbeddedDecoderLayer(GradientCheckpointingLayer):
284
+ def __init__(self, config: PanguEmbeddedConfig, layer_idx: int):
285
+ super().__init__()
286
+ self.hidden_size = config.hidden_size
287
+ self.self_attn = PanguEmbeddedAttention(config=config, layer_idx=layer_idx)
288
+ self.mlp = PanguEmbeddedMLP(config)
289
+ self.input_layernorm = PanguEmbeddedRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
290
+ self.post_attention_layernorm = PanguEmbeddedRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
291
+
292
+ def forward(
293
+ self,
294
+ hidden_states: torch.Tensor,
295
+ attention_mask: Optional[torch.Tensor] = None,
296
+ position_ids: Optional[torch.LongTensor] = None,
297
+ past_key_value: Optional[Cache] = None,
298
+ output_attentions: Optional[bool] = False,
299
+ use_cache: Optional[bool] = False,
300
+ cache_position: Optional[torch.LongTensor] = None,
301
+ position_embeddings: Optional[tuple[torch.Tensor, torch.Tensor]] = None, # necessary, but kept here for BC
302
+ **kwargs: Unpack[FlashAttentionKwargs],
303
+ ) -> tuple[torch.FloatTensor, Optional[tuple[torch.FloatTensor, torch.FloatTensor]]]:
304
+ residual = hidden_states
305
+ hidden_states = self.input_layernorm(hidden_states)
306
+
307
+ # Self Attention
308
+ hidden_states, self_attn_weights = self.self_attn(
309
+ hidden_states=hidden_states,
310
+ attention_mask=attention_mask,
311
+ position_ids=position_ids,
312
+ past_key_value=past_key_value,
313
+ output_attentions=output_attentions,
314
+ use_cache=use_cache,
315
+ cache_position=cache_position,
316
+ position_embeddings=position_embeddings,
317
+ **kwargs,
318
+ )
319
+ hidden_states = residual + hidden_states
320
+
321
+ # Fully Connected
322
+ residual = hidden_states
323
+ hidden_states = self.post_attention_layernorm(hidden_states)
324
+ hidden_states = self.mlp(hidden_states)
325
+ hidden_states = residual + hidden_states
326
+
327
+ outputs = (hidden_states,)
328
+ if output_attentions:
329
+ outputs += (self_attn_weights,)
330
+
331
+ return outputs
332
+
333
+
334
+ @auto_docstring
335
+ class PanguEmbeddedPreTrainedModel(PreTrainedModel):
336
+ config_class = PanguEmbeddedConfig
337
+ base_model_prefix = "model"
338
+ supports_gradient_checkpointing = True
339
+ _no_split_modules = ["PanguEmbeddedDecoderLayer"]
340
+ _skip_keys_device_placement = ["past_key_values"]
341
+ _supports_flash_attn_3 = True
342
+ _supports_flash_attn_2 = True
343
+ _supports_sdpa = True
344
+ _supports_flex_attn = True
345
+ _supports_cache_class = True
346
+ _supports_quantized_cache = True
347
+ _supports_static_cache = True
348
+ _supports_attention_backend = True
349
+
350
+ def _init_weights(self, module):
351
+ std = self.config.initializer_range
352
+ if isinstance(module, nn.Linear):
353
+ module.weight.data.normal_(mean=0.0, std=std)
354
+ if module.bias is not None:
355
+ module.bias.data.zero_()
356
+ elif isinstance(module, nn.Embedding):
357
+ module.weight.data.normal_(mean=0.0, std=std)
358
+ if module.padding_idx is not None:
359
+ module.weight.data[module.padding_idx].zero_()
360
+ elif isinstance(module, PanguEmbeddedRMSNorm):
361
+ module.weight.data.fill_(1.0)
362
+
363
+
364
+ @auto_docstring
365
+ class PanguEmbeddedModel(PanguEmbeddedPreTrainedModel):
366
+ def __init__(self, config: PanguEmbeddedConfig):
367
+ super().__init__(config)
368
+ self.padding_idx = config.pad_token_id
369
+ self.vocab_size = config.vocab_size
370
+
371
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
372
+ self.layers = nn.ModuleList(
373
+ [PanguEmbeddedDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
374
+ )
375
+ self.norm = PanguEmbeddedRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
376
+ self.rotary_emb = PanguEmbeddedRotaryEmbedding(config=config)
377
+ self.gradient_checkpointing = False
378
+
379
+ # Initialize weights and apply final processing
380
+ self.post_init()
381
+
382
+ def get_input_embeddings(self):
383
+ return self.embed_tokens
384
+
385
+ def set_input_embeddings(self, value):
386
+ self.embed_tokens = value
387
+
388
+ @can_return_tuple
389
+ @auto_docstring
390
+ def forward(
391
+ self,
392
+ input_ids: Optional[torch.LongTensor] = None,
393
+ attention_mask: Optional[torch.Tensor] = None,
394
+ position_ids: Optional[torch.LongTensor] = None,
395
+ past_key_values: Optional[Cache] = None,
396
+ inputs_embeds: Optional[torch.FloatTensor] = None,
397
+ use_cache: Optional[bool] = None,
398
+ output_attentions: Optional[bool] = None,
399
+ output_hidden_states: Optional[bool] = None,
400
+ cache_position: Optional[torch.LongTensor] = None,
401
+ **flash_attn_kwargs: Unpack[FlashAttentionKwargs],
402
+     ) -> BaseModelOutputWithPast:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
+
+         if self.gradient_checkpointing and self.training and use_cache:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
+             )
+             use_cache = False
+
+         # TODO (joao): remove this exception in v4.56 -- it exists for users that try to pass a legacy cache
+         if not isinstance(past_key_values, (type(None), Cache)):
+             raise ValueError("The `past_key_values` should be either a `Cache` object or `None`.")
+
+         if inputs_embeds is None:
+             inputs_embeds = self.embed_tokens(input_ids)
+
+         if use_cache and past_key_values is None:
+             past_key_values = DynamicCache()
+
+         if cache_position is None:
+             past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+             cache_position = torch.arange(
+                 past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
+             )
+
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+
+         causal_mask = create_causal_mask(
+             config=self.config,
+             input_embeds=inputs_embeds,
+             attention_mask=attention_mask,
+             cache_position=cache_position,
+             past_key_values=past_key_values,
+             position_ids=position_ids,
+         )
+
+         hidden_states = inputs_embeds
+
+         # create position embeddings to be shared across the decoder layers
+         position_embeddings = self.rotary_emb(hidden_states, position_ids)
+
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+
+         for decoder_layer in self.layers[: self.config.num_hidden_layers]:
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+
+             layer_outputs = decoder_layer(
+                 hidden_states,
+                 attention_mask=causal_mask,
+                 position_ids=position_ids,
+                 past_key_value=past_key_values,
+                 output_attentions=output_attentions,
+                 use_cache=use_cache,
+                 cache_position=cache_position,
+                 position_embeddings=position_embeddings,
+                 **flash_attn_kwargs,
+             )
+
+             hidden_states = layer_outputs[0]
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+         hidden_states = self.norm(hidden_states)
+
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=past_key_values if use_cache else None,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
+
+
+ class KwargsForCausalLM(FlashAttentionKwargs, LossKwargs): ...
+
+
+ @auto_docstring
+ class PanguEmbeddedForCausalLM(PanguEmbeddedPreTrainedModel, GenerationMixin):
+     _tied_weights_keys = ["lm_head.weight"]
+     _tp_plan = {"lm_head": "colwise_rep"}
+     _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = PanguEmbeddedModel(config)
+         self.vocab_size = config.vocab_size
+         self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.model.embed_tokens = value
+
+     def get_output_embeddings(self):
+         return self.lm_head
+
+     def set_output_embeddings(self, new_embeddings):
+         self.lm_head = new_embeddings
+
+     def set_decoder(self, decoder):
+         self.model = decoder
+
+     def get_decoder(self):
+         return self.model
+
+     @can_return_tuple
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Cache] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         logits_to_keep: Union[int, torch.Tensor] = 0,
+         **kwargs: Unpack[KwargsForCausalLM],
+     ) -> CausalLMOutputWithPast:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+
+         # decoder outputs consist of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs: BaseModelOutputWithPast = self.model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             cache_position=cache_position,
+             **kwargs,
+         )
+
+         hidden_states = outputs.last_hidden_state
+         # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
+         slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
+         logits = self.lm_head(hidden_states[:, slice_indices, :])
+
+         loss = None
+         if labels is not None:
+             loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)
+
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+
+
+ __all__ = [
+     "PanguEmbeddedForCausalLM",
+     "PanguEmbeddedModel",
+     "PanguEmbeddedPreTrainedModel",
+ ]
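
For reference, a minimal smoke test of the classes exported above. This is a hedged sketch, not part of the repo: the local checkout path, prompt, and generation settings are illustrative assumptions.

# Hypothetical usage sketch: load this repo's remote code via the Auto classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./openPangu-Embedded"  # assumption: a local checkout of this repo
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, trust_remote_code=True
)

inputs = tokenizer("Hello", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))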
modular_openpangu_dense.py ADDED
@@ -0,0 +1,149 @@
+ # coding=utf-8
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
+ # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ from typing import Callable, Optional
+
+ import torch
+ from torch import nn
+
+ import torch_npu
+ # Importing transfer_to_npu redirects CUDA-style calls to their NPU equivalents.
+ from torch_npu.contrib import transfer_to_npu
+
+ if "910" in torch.npu.get_device_name():
+     NPU_ATTN_INFR = True
+     print("[INFO] Ascend 910 series NPU detected. Using NPU fused infer attention.")
+ else:
+     NPU_ATTN_INFR = False
+
+ from transformers.cache_utils import Cache
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS
+ from transformers.processing_utils import Unpack
+ from transformers.utils import logging
+ from transformers.models.llama.modeling_llama import (
+     LlamaAttention,
+     LlamaDecoderLayer,
+     LlamaForCausalLM,
+     LlamaForSequenceClassification,
+     LlamaMLP,
+     LlamaModel,
+     apply_rotary_pos_emb,
+     eager_attention_forward,
+ )
+ from .configuration_openpangu_dense import PanguEmbeddedConfig
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class PanguEmbeddedMLP(LlamaMLP):
+     def __init__(self, config):
+         super().__init__(config)
+         self.gate_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+         self.up_proj = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+         self.down_proj = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+
+
+ class PanguEmbeddedAttention(LlamaAttention):
+     def __init__(self, config: PanguEmbeddedConfig, layer_idx: int):
+         super().__init__(config, layer_idx)
+         self.config = config
+         self.layer_idx = layer_idx
+         self.head_dim = getattr(config, "head_dim", config.hidden_size // config.num_attention_heads)
+         self.num_heads = config.num_attention_heads
+         self.num_key_value_heads = config.num_key_value_heads
+         self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
+         self.scaling = self.head_dim**-0.5
+         self.attention_dropout = config.attention_dropout
+         self.is_causal = True
+
+         self.q_proj = nn.Linear(config.hidden_size, config.num_attention_heads * self.head_dim, bias=config.bias)
+         self.k_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.bias)
+         self.v_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=config.bias)
+         self.o_proj = nn.Linear(config.num_attention_heads * self.head_dim, config.hidden_size, bias=config.bias)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         position_embeddings: tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor],
+         past_key_value: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
+         input_shape = hidden_states.shape[:-1]
+         hidden_shape = (*input_shape, -1, self.head_dim)
+
+         query_states = self.q_proj(hidden_states).view(hidden_shape).transpose(1, 2)
+         key_states = self.k_proj(hidden_states).view(hidden_shape).transpose(1, 2)
+         value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
+
+         cos, sin = position_embeddings
+         query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
+
+         if past_key_value is not None:
+             # sin and cos are specific to RoPE models; cache_position needed for the static cache
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         attention_interface: Callable = eager_attention_forward
+         if self.config._attn_implementation != "eager":
+             attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
+
+         if not self.training and NPU_ATTN_INFR:
+             # Boolean mask convention for the fused NPU kernel: True marks masked positions.
+             q_len = input_shape[1]
+             if attention_mask is not None:
+                 attention_mask = ~attention_mask.bool()
+             elif q_len > 1:
+                 attention_mask = torch.triu(torch.ones([q_len, q_len]), diagonal=1).bool().unsqueeze(0).unsqueeze(0).to(query_states.device)
+
+             attn_output, _ = torch_npu.npu_fused_infer_attention_score(
+                 query_states, key_states, value_states,
+                 num_heads=self.num_heads, num_key_value_heads=self.num_key_value_heads,
+                 input_layout="BNSD", atten_mask=attention_mask, scale=self.scaling)
+             attn_output = attn_output.transpose(1, 2)
+             attn_weights = None
+         else:
+             attn_output, attn_weights = attention_interface(
+                 self,
+                 query_states,
+                 key_states,
+                 value_states,
+                 attention_mask,
+                 dropout=0.0 if not self.training else self.attention_dropout,
+                 scaling=self.scaling,
+                 **kwargs,
+             )
+
+         attn_output = attn_output.reshape(*input_shape, -1).contiguous()
+         attn_output = self.o_proj(attn_output)
+         return attn_output, attn_weights
+
+
+ class PanguEmbeddedDecoderLayer(LlamaDecoderLayer):
+     pass
+
+
+ class PanguEmbeddedModel(LlamaModel):
+     pass
+
+
+ class PanguEmbeddedForCausalLM(LlamaForCausalLM):
+     pass
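
One detail worth highlighting in PanguEmbeddedAttention.forward: when no mask is supplied and more than one query position is processed, the NPU inference branch builds a boolean causal mask in which True marks positions to be blocked (consistent with the `~attention_mask.bool()` inversion above, which appears to be the convention the fused kernel expects). A standalone sketch of that construction in plain PyTorch, no NPU required:

# Sketch of the boolean causal mask used by the fused-inference branch:
# entry [i, j] is True when key position j lies in the future of query i.
import torch

q_len = 4
mask = torch.triu(torch.ones([q_len, q_len]), diagonal=1).bool()
print(mask)
# tensor([[False,  True,  True,  True],
#         [False, False,  True,  True],
#         [False, False, False,  True],
#         [False, False, False, False]])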
special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "[unused10]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
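
Note that this map overrides the class defaults in tokenization_openpangu.py (which fall back to `</s>` for both eos and pad): after loading, eos resolves to `[unused10]` and pad to `<unk>`. A quick check, reusing the assumed local path from the earlier sketch:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./openPangu-Embedded", trust_remote_code=True)
print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)
# expected: <s> [unused10] <unk> <unk>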
tokenization_openpangu.py ADDED
@@ -0,0 +1,273 @@
+ # coding=utf-8
+ # Copyright (c) 2025 Huawei Technologies Co., Ltd. All rights reserved.
+ # Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX
+ # and OPT implementations in this library. It has been modified from its
+ # original forms to accommodate minor architectural differences compared
+ # to GPT-NeoX and OPT used by the Meta AI team that trained the model.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ import os
+ from shutil import copyfile
+ from typing import Any, Dict, List, Optional, Tuple
+
+ import sentencepiece as spm
+
+ from transformers.tokenization_utils import PreTrainedTokenizer
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+ VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"}
+
+ PRETRAINED_VOCAB_FILES_MAP = {}
+
+
+ def convert_bool(string):
+     """Convert the strings "true"/"false" (any case) to booleans; pass everything else through."""
+     if isinstance(string, str):
+         if string.lower() == "true":
+             return True
+         elif string.lower() == "false":
+             return False
+     return string
+
+
+ class PanguTokenizer(PreTrainedTokenizer):
+     """
+     Construct a Pangu tokenizer, based on a SentencePiece model.
+
+     Args:
+         vocab_file (`str`):
+             Path to the vocabulary file.
+     """
+
+     vocab_files_names = VOCAB_FILES_NAMES
+     pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
+     model_input_names = ["input_ids", "attention_mask"]
+     _auto_class = "AutoTokenizer"
+
+     def __init__(
+         self,
+         vocab_file,
+         unk_token="<unk>",
+         bos_token="<s>",
+         eos_token="</s>",
+         pad_token="</s>",
+         sp_model_kwargs: Optional[Dict[str, Any]] = None,
+         add_bos_token=True,
+         add_eos_token=False,
+         decode_with_prefix_space=False,
+         clean_up_tokenization_spaces=False,
+         **kwargs,
+     ):
+         self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
+         # The SentencePiece model must be loaded before calling the parent
+         # constructor, which may already query `vocab_size`.
+         self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
+         self.sp_model.Load(vocab_file)
+         super().__init__(
+             bos_token=bos_token,
+             eos_token=eos_token,
+             unk_token=unk_token,
+             pad_token=pad_token,
+             clean_up_tokenization_spaces=clean_up_tokenization_spaces,
+             **kwargs,
+         )
+         self.vocab_file = vocab_file
+         self.add_bos_token = convert_bool(add_bos_token)
+         self.add_eos_token = add_eos_token
+         self.decode_with_prefix_space = decode_with_prefix_space
+         self._no_prefix_space_tokens = None
+
+     @property
+     def no_prefix_space_tokens(self):
+         if self._no_prefix_space_tokens is None:
+             vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
+             self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith("▁")}
+         return self._no_prefix_space_tokens
+
+     @property
+     def vocab_size(self):
+         """Returns vocab size"""
+         return self.sp_model.get_piece_size()
+
+     @property
+     def bos_token_id(self) -> Optional[int]:
+         return self.sp_model.bos_id()
+
+     @property
+     def eos_token_id(self) -> Optional[int]:
+         return super().eos_token_id
+
+     def get_vocab(self):
+         """Returns vocab as a dict"""
+         vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
+         vocab.update(self.added_tokens_encoder)
+         return vocab
+
+     def _tokenize(self, text):
+         """Returns a tokenized string."""
+         return self.sp_model.encode(text, out_type=str)
+
+     def _convert_token_to_id(self, token):
+         """Converts a token (str) to an id using the vocab."""
+         return self.sp_model.piece_to_id(token)
+
+     def _convert_id_to_token(self, index):
+         """Converts an index (integer) to a token (str) using the vocab."""
+         token = self.sp_model.IdToPiece(index)
+         return token
+
+     def _maybe_add_prefix_space(self, tokens, decoded):
+         if tokens and tokens[0] not in self.no_prefix_space_tokens:
+             return " " + decoded
+         else:
+             return decoded
+
+     def convert_tokens_to_string(self, tokens):
+         """Converts a sequence of tokens (string) in a single string."""
+         current_sub_tokens = []
+         out_string = ""
+         for token in tokens:
+             # make sure that special tokens are not decoded using sentencepiece model
+             if token in self.all_special_tokens:
+                 # Decode the current sub-tokens first
+                 if current_sub_tokens:
+                     out_string += self.sp_model.decode(current_sub_tokens)
+                     current_sub_tokens = []
+                 # Append the special token without adding extra spaces
+                 out_string += token
+             else:
+                 current_sub_tokens.append(token)
+         # Decode any remaining sub-tokens
+         if current_sub_tokens:
+             out_string += self.sp_model.decode(current_sub_tokens)
+         # Clean up leading and trailing spaces
+         if self.clean_up_tokenization_spaces:
+             out_string = self.clean_up_tokenization(out_string)
+         out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
+         return out_string[1:]
+
+     # Override decode to default `spaces_between_special_tokens` to False
+     def decode(self,
+                token_ids,
+                spaces_between_special_tokens: bool = False,
+                **kwargs):
+         return super().decode(
+             token_ids=token_ids,
+             spaces_between_special_tokens=spaces_between_special_tokens,
+             **kwargs,
+         )
+
+     def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
+         """
+         Save the vocabulary and special tokens file to a directory.
+
+         Args:
+             save_directory (`str`):
+                 The directory in which to save the vocabulary.
+
+         Returns:
+             `Tuple(str)`: Paths to the files saved.
+         """
+         if not os.path.isdir(save_directory):
+             logger.error(f"Vocabulary path ({save_directory}) should be a directory")
+             return ("",)
+         out_vocab_file = os.path.join(
+             save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"]
+         )
+
+         if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
+             copyfile(self.vocab_file, out_vocab_file)
+         elif not os.path.isfile(self.vocab_file):
+             with open(out_vocab_file, "wb") as fi:
+                 content_spiece_model = self.sp_model.serialized_model_proto()
+                 fi.write(content_spiece_model)
+
+         return (out_vocab_file,)
+
+     def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
+         if self.add_bos_token:
+             bos_token_ids = [self.bos_token_id]
+         else:
+             bos_token_ids = []
+
+         output = bos_token_ids + token_ids_0
+
+         if token_ids_1 is not None:
+             output = output + token_ids_1
+
+         if self.add_eos_token:
+             output = output + [self.eos_token_id]
+
+         return output
+
+     def get_special_tokens_mask(
+         self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
+     ) -> List[int]:
+         """
+         Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
+         special tokens using the tokenizer `prepare_for_model` method.
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+             already_has_special_tokens (`bool`, *optional*, defaults to `False`):
+                 Whether or not the token list is already formatted with special tokens for the model.
+
+         Returns:
+             `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
+         """
+         if already_has_special_tokens:
+             return super().get_special_tokens_mask(
+                 token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
+             )
+
+         if token_ids_1 is None:
+             return [1] + ([0] * len(token_ids_0)) + [1]
+         return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
+
+     def create_token_type_ids_from_sequences(
+         self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
+     ) -> List[int]:
+         """
+         Create a mask from the two sequences passed to be used in a sequence-pair classification task. This
+         tokenizer does not make use of token type ids, therefore a list of zeros is returned.
+
+         Args:
+             token_ids_0 (`List[int]`):
+                 List of IDs.
+             token_ids_1 (`List[int]`, *optional*):
+                 Optional second list of IDs for sequence pairs.
+
+         Returns:
+             `List[int]`: List of zeros.
+         """
+         eos = [self.eos_token_id]
+
+         if token_ids_1 is None:
+             return len(token_ids_0 + eos) * [0]
+         return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
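
With the defaults above (`add_bos_token=True`, `add_eos_token=False`), `build_inputs_with_special_tokens` only prepends BOS. A small illustration, reusing the `tok` instance loaded in the earlier sketch:

# build_inputs_with_special_tokens prepends BOS and, by default, appends nothing.
token_ids = tok.convert_tokens_to_ids(tok.tokenize("hello world"))
with_special = tok.build_inputs_with_special_tokens(token_ids)
assert with_special[0] == tok.bos_token_id
assert with_special[1:] == token_ids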
tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6b16f1558c0cd4ae6ef1a2c605713be0a514f50e1ce2d2c878979ce988c148ec
+ size 2477809
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"add_bos_token": true, "add_eos_token": false, "add_prefix_space": true, "added_tokens_decoder": {"0": {"content": "<unk>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "1": {"content": "<s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "2": {"content": "</s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45806": {"content": "<|User|>:", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45813": {"content": "<|Bot|>:", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45830": {"content": "[unused0]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45840": {"content": "[unused1]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45846": {"content": "[unused2]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45849": {"content": "[unused3]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45861": {"content": "[unused4]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45866": {"content": "[unused5]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45874": {"content": "[unused6]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45883": {"content": "[unused7]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45884": {"content": "[unused8]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45887": {"content": "[unused9]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45892": {"content": "[unused10]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45920": {"content": "[unused11]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45932": {"content": "[unused12]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45938": {"content": "[unused13]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45953": {"content": "[unused14]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45968": {"content": "[unused15]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45974": {"content": "[unused16]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45982": {"content": "[unused17]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "45986": {"content": "[unused18]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46005": {"content": "[unused19]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46007": {"content": "[unused20]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46014": {"content": "[unused21]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46017": {"content": "[unused22]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46028": {"content": "[unused23]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46032": {"content": "[unused24]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46081": {"content": "[unused25]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46086": {"content": "[unused26]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46101": {"content": "[unused27]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46183": {"content": "[unused28]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46230": {"content": "[unused29]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46245": {"content": "[unused30]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "46257": {"content": "[unused31]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "144208": {"content": "[unused32]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}, "144209": {"content": "[unused33]", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}}, "auto_map": {"AutoTokenizer": ["tokenization_openpangu.PanguTokenizer", null]}, "bos_token": "<s>", "clean_up_tokenization_spaces": false, "eos_token": "[unused10]", "legacy": true, "model_max_length": 1000000000000000019884624838656, "pad_token": "<unk>", "sp_model_kwargs": {}, "spaces_between_special_tokens": false, "tokenizer_class": "PanguTokenizer", "unk_token": "<unk>", "use_default_system_prompt": false, "chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '[unused9]系统:[unused10]' }}{% endif %}{% if message['role'] == 'system' %}{{ '[unused9]系统:' + message['content'] + '[unused10]' }}{% endif %}{% if message['role'] == 'assistant' %}{{'[unused9]助手:' + message['content'] + '[unused10]'}}{% endif %}{% if message['role'] == 'tool' %}{{'[unused9]工具:' + message['content'] + '[unused10]'}}{% endif %}{% if message['role'] == 'function' %}{{'[unused9]方法:' + message['content'] + '[unused10]'}}{% endif %}{% if message['role'] == 'user' %}{{'[unused9]用户:' + message['content'] + '[unused10]'}}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '[unused9]助手:' }}{% endif %}"}