manaestras committed
Commit 37e090c · verified · 1 Parent(s): 30ae141

Upload 4 files

Files changed (4)
  1. LICENSE.txt +77 -0
  2. Notice.txt +160 -0
  3. README.md +503 -5
  4. README_CN.md +748 -0
LICENSE.txt ADDED
@@ -0,0 +1,77 @@
1
+ TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT
2
+ Tencent Hunyuan 7B Release Date: August 4, 2025
3
+ THIS LICENSE AGREEMENT DOES NOT APPLY IN THE EUROPEAN UNION, UNITED KINGDOM AND SOUTH KOREA AND IS EXPRESSLY LIMITED TO THE TERRITORY, AS DEFINED BELOW.
4
+ By clicking to agree or by using, reproducing, modifying, distributing, performing or displaying any portion or element of the Tencent Hunyuan Works, including via any Hosted Service, You will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.
5
+ 1. DEFINITIONS.
6
+ a. “Acceptable Use Policy” shall mean the policy made available by Tencent as set forth in the Exhibit A.
7
+ b. “Agreement” shall mean the terms and conditions for use, reproduction, distribution, modification, performance and displaying of Tencent Hunyuan Works or any portion or element thereof set forth herein.
8
+ c. “Documentation” shall mean the specifications, manuals and documentation for Tencent Hunyuan made publicly available by Tencent.
9
+ d. “Hosted Service” shall mean a hosted service offered via an application programming interface (API), web access, or any other electronic or remote means.
10
+ e. “Licensee,” “You” or “Your” shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Tencent Hunyuan Works for any purpose and in any field of use.
11
+ f. “Materials” shall mean, collectively, Tencent’s proprietary Tencent Hunyuan and Documentation (and any portion thereof) as made available by Tencent under this Agreement.
12
+ g. “Model Derivatives” shall mean all: (i) modifications to Tencent Hunyuan or any Model Derivative of Tencent Hunyuan; (ii) works based on Tencent Hunyuan or any Model Derivative of Tencent Hunyuan; or (iii) any other machine learning model which is created by transfer of patterns of the weights, parameters, operations, or Output of Tencent Hunyuan or any Model Derivative of Tencent Hunyuan, to that model in order to cause that model to perform similarly to Tencent Hunyuan or a Model Derivative of Tencent Hunyuan, including distillation methods, methods that use intermediate data representations, or methods based on the generation of synthetic data Outputs by Tencent Hunyuan or a Model Derivative of Tencent Hunyuan for training that model. For clarity, Outputs by themselves are not deemed Model Derivatives.
13
+ h. “Output” shall mean the information and/or content output of Tencent Hunyuan or a Model Derivative that results from operating or otherwise using Tencent Hunyuan or a Model Derivative, including via a Hosted Service.
14
+ i. “Tencent,” “We” or “Us” shall mean the applicable entity or entities in the Tencent corporate family that own(s) intellectual property or other rights embodied in or utilized by the Materials.
15
+ j. “Tencent Hunyuan” shall mean the large language models, text/image/video/audio/3D generation models, and multimodal large language models and their software and algorithms, including trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing made publicly available by Us, including, without limitation to, Tencent Hunyuan 7B released at [https://github.com/Tencent-Hunyuan/Hunyuan-7B].
16
+ k. “Tencent Hunyuan Works” shall mean: (i) the Materials; (ii) Model Derivatives; and (iii) all derivative works thereof.
17
+ l. “Territory” shall mean the worldwide territory, excluding the territory of the European Union, United Kingdom and South Korea.
18
+ m. “Third Party” or “Third Parties” shall mean individuals or legal entities that are not under common control with Us or You.
19
+ n. “including” shall mean including but not limited to.
20
+ 2. GRANT OF RIGHTS.
21
+ We grant You, for the Territory only, a non-exclusive, non-transferable and royalty-free limited license under Tencent’s intellectual property or other rights owned by Us embodied in or utilized by the Materials to use, reproduce, distribute, create derivative works of (including Model Derivatives), and make modifications to the Materials, only in accordance with the terms of this Agreement and the Acceptable Use Policy, and You must not violate (or encourage or permit anyone else to violate) any term of this Agreement or the Acceptable Use Policy.
22
+ 3. DISTRIBUTION.
23
+ You may, subject to Your compliance with this Agreement, distribute or make available to Third Parties the Tencent Hunyuan Works, exclusively in the Territory, provided that You meet all of the following conditions:
24
+ a. You must provide all such Third Party recipients of the Tencent Hunyuan Works or products or services using them a copy of this Agreement;
25
+ b. You must cause any modified files to carry prominent notices stating that You changed the files;
26
+ c. You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent Hunyuan Works; and (ii) mark the products or services developed by using the Tencent Hunyuan Works to indicate that the product/service is “Powered by Tencent Hunyuan”; and
27
+ d. All distributions to Third Parties (other than through a Hosted Service) must be accompanied by a “Notice” text file that contains the following notice: “Tencent Hunyuan is licensed under the Tencent Hunyuan Community License Agreement, Copyright © 2025 Tencent. All Rights Reserved. The trademark rights of “Tencent Hunyuan” are owned by Tencent or its affiliate.”
28
+ You may add Your own copyright statement to Your modifications and, except as set forth in this Section and in Section 5, may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Model Derivatives as a whole, provided Your use, reproduction, modification, distribution, performance and display of the work otherwise complies with the terms and conditions of this Agreement (including as regards the Territory). If You receive Tencent Hunyuan Works from a Licensee as part of an integrated end user product, then this Section 3 of this Agreement will not apply to You.
29
+ 4. ADDITIONAL COMMERCIAL TERMS.
30
+ If, on the Tencent Hunyuan version release date, the monthly active users of all products or services made available by or for Licensee is greater than 100 million monthly active users in the preceding calendar month, You must request a license from Tencent, which Tencent may grant to You in its sole discretion, and You are not authorized to exercise any of the rights under this Agreement unless or until Tencent otherwise expressly grants You such rights.
31
+ 5. RULES OF USE.
32
+ a. Your use of the Tencent Hunyuan Works must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Tencent Hunyuan Works, which is hereby incorporated by reference into this Agreement. You must include the use restrictions referenced in these Sections 5(a) and 5(b) as an enforceable provision in any agreement (e.g., license agreement, terms of use, etc.) governing the use and/or distribution of Tencent Hunyuan Works and You must provide notice to subsequent users to whom You distribute that Tencent Hunyuan Works are subject to the use restrictions in these Sections 5(a) and 5(b).
33
+ b. You must not use the Tencent Hunyuan Works or any Output or results of the Tencent Hunyuan Works to improve any other AI model (other than Tencent Hunyuan or Model Derivatives thereof).
34
+ c. You must not use, reproduce, modify, distribute, or display the Tencent Hunyuan Works, Output or results of the Tencent Hunyuan Works outside the Territory. Any such use outside the Territory is unlicensed and unauthorized under this Agreement.
35
+ 6. INTELLECTUAL PROPERTY.
36
+ a. Subject to Tencent’s ownership of Tencent Hunyuan Works made by or for Tencent and intellectual property rights therein, conditioned upon Your compliance with the terms and conditions of this Agreement, as between You and Tencent, You will be the owner of any derivative works and modifications of the Materials and any Model Derivatives that are made by or for You.
37
+ b. No trademark licenses are granted under this Agreement, and in connection with the Tencent Hunyuan Works, Licensee may not use any name or mark owned by or associated with Tencent or any of its affiliates, except as required for reasonable and customary use in describing and distributing the Tencent Hunyuan Works. Tencent hereby grants You a license to use “Tencent Hunyuan” (the “Mark”) in the Territory solely as required to comply with the provisions of Section 3(c), provided that You comply with any applicable laws related to trademark protection. All goodwill arising out of Your use of the Mark will inure to the benefit of Tencent.
38
+ c. If You commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against Us or any person or entity alleging that the Materials or any Output, or any portion of any of the foregoing, infringe any intellectual property or other right owned or licensable by You, then all licenses granted to You under this Agreement shall terminate as of the date such lawsuit or other proceeding is filed. You will defend, indemnify and hold harmless Us from and against any claim by any Third Party arising out of or related to Your or the Third Party’s use or distribution of the Tencent Hunyuan Works.
39
+ d. Tencent claims no rights in Outputs You generate. You and Your users are solely responsible for Outputs and their subsequent uses.
40
+ 7. DISCLAIMERS OF WARRANTY AND LIMITATIONS OF LIABILITY.
41
+ a. We are not obligated to support, update, provide training for, or develop any further version of the Tencent Hunyuan Works or to grant any license thereto.
42
+ b. UNLESS AND ONLY TO THE EXTENT REQUIRED BY APPLICABLE LAW, THE TENCENT HUNYUAN WORKS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED “AS IS” WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES OF ANY KIND INCLUDING ANY WARRANTIES OF TITLE, MERCHANTABILITY, NONINFRINGEMENT, COURSE OF DEALING, USAGE OF TRADE, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING, REPRODUCING, MODIFYING, PERFORMING, DISPLAYING OR DISTRIBUTING ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS AND ASSUME ANY AND ALL RISKS ASSOCIATED WITH YOUR OR A THIRD PARTY’S USE OR DISTRIBUTION OF ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS AND YOUR EXERCISE OF RIGHTS AND PERMISSIONS UNDER THIS AGREEMENT.
43
+ c. TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL TENCENT OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, FOR ANY DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY, CONSEQUENTIAL OR PUNITIVE DAMAGES, OR LOST PROFITS OF ANY KIND ARISING FROM THIS AGREEMENT OR RELATED TO ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS, EVEN IF TENCENT OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
44
+ 8. SURVIVAL AND TERMINATION.
45
+ a. The term of this Agreement shall commence upon Your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
46
+ b. We may terminate this Agreement if You breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, You must promptly delete and cease use of the Tencent Hunyuan Works. Sections 6(a), 6(c), 7 and 9 shall survive the termination of this Agreement.
47
+ 9. GOVERNING LAW AND JURISDICTION.
48
+ a. This Agreement and any dispute arising out of or relating to it will be governed by the laws of the Hong Kong Special Administrative Region of the People’s Republic of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
49
+ b. Exclusive jurisdiction and venue for any dispute arising out of or relating to this Agreement will be a court of competent jurisdiction in the Hong Kong Special Administrative Region of the People’s Republic of China, and Tencent and Licensee consent to the exclusive jurisdiction of such court with respect to any such dispute.
50
+
51
+ EXHIBIT A
52
+ ACCEPTABLE USE POLICY
53
+
54
+ Tencent reserves the right to update this Acceptable Use Policy from time to time.
55
+ Last modified: November 5, 2024
56
+
57
+ Tencent endeavors to promote safe and fair use of its tools and features, including Tencent Hunyuan. You agree not to use Tencent Hunyuan or Model Derivatives:
58
+ 1. Outside the Territory;
59
+ 2. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
60
+ 3. To harm Yourself or others;
61
+ 4. To repurpose or distribute output from Tencent Hunyuan or any Model Derivatives to harm Yourself or others;
62
+ 5. To override or circumvent the safety guardrails and safeguards We have put in place;
63
+ 6. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
64
+ 7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
65
+ 8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
66
+ 9. To intentionally defame, disparage or otherwise harass others;
67
+ 10. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
68
+ 11. To generate or disseminate personal identifiable information with the purpose of harming others;
69
+ 12. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including –through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
70
+ 13. To impersonate another individual without consent, authorization, or legal right;
71
+ 14. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
72
+ 15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
73
+ 16. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
74
+ 17. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
75
+ 18. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
76
+ 19. For military purposes;
77
+ 20. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
Notice.txt ADDED
@@ -0,0 +1,160 @@
1
+ Usage and Legal Notices:
2
+
3
+ Tencent is pleased to support the open source community by making Tencent Hunyuan 7B available.
4
+
5
+ Copyright (C) Tencent. All rights reserved. The below software and/or models in this distribution may have been modified by Tencent ("Tencent Modifications"). All Tencent Modifications are Copyright (C) Tencent.
6
+
7
+ Tencent Hunyuan 7B is licensed under TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT, which can be found in this repository called "LICENSE", except for the third-party components listed below. Tencent Hunyuan 7B does not impose any additional limitations beyond what is outlined in the respective licenses of these third-party components. Users must comply with all terms and conditions of original licenses of these third-party components and must ensure that the usage of the third party components adheres to all relevant laws and regulations.
8
+
9
+ For avoidance of doubts, Tencent Hunyuan 7B refers to the inference code, training code, parameters and the weights of Tencent Hunyuan 7B only, which are made publicly available by Tencent in accordance with the TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT.
10
+
11
+
12
+ Other dependencies and licenses:
13
+
14
+
15
+ Open Source Software Licensed under the Apache License Version 2.0:
16
+ The below software in this distribution may have been modified by Tencent ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2025 Tencent.
17
+ --------------------------------------------------------------------
18
+ 1. pytorch
19
+ Copyright 2016-2017 TorchAPI
20
+ Copyright 2016-2017 Contributors
21
+
22
+ 2. VLLM
23
+ Copyright (c) vllm original author and authors
24
+ Please note this software has been modified by Tencent in this distribution.
25
+
26
+ 3. transformers
27
+ Copyright 2018- The Hugging Face team. All rights reserved.
28
+
29
+ 4. accelerate
30
+ Copyright (c) accelerate original author and authors
31
+
32
+
33
+ Terms of the Apache License Version 2.0:
34
+ --------------------------------------------------------------------
35
+ Apache License
36
+
37
+ Version 2.0, January 2004
38
+
39
+ http://www.apache.org/licenses/
40
+
41
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
42
+ 1. Definitions.
43
+
44
+ "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
45
+
46
+ "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
47
+
48
+ "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
49
+
50
+ "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
51
+
52
+ "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
53
+
54
+ "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
55
+
56
+ "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
57
+
58
+ "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
59
+
60
+ "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
63
+
64
+ 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
65
+
66
+ 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
67
+
68
+ 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
69
+
70
+ You must give any other recipients of the Work or Derivative Works a copy of this License; and
71
+
72
+ You must cause any modified files to carry prominent notices stating that You changed the files; and
73
+
74
+ You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
75
+
76
+ If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
77
+
78
+ You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
79
+
80
+ 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
81
+
82
+ 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
83
+
84
+ 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
85
+
86
+ 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
87
+
88
+ 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
89
+
90
+ END OF TERMS AND CONDITIONS
91
+
92
+
93
+
94
+ Open Source Software Licensed under the BSD 3-Clause License and Other Licenses of the Third-Party Components therein:
95
+ --------------------------------------------------------------------
96
+ 1. pytorch
97
+ Copyright (c) 2016- Facebook, Inc (Adam Paszke)
98
+ Copyright (c) 2014- Facebook, Inc (Soumith Chintala)
99
+ Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
100
+ Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
101
+ Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
102
+ Copyright (c) 2011-2013 NYU (Clement Farabet)
103
+ Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
104
+ Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
105
+ Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
106
+
107
+
108
+ Terms of the BSD 3-Clause:
109
+ --------------------------------------------------------------------
110
+ Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
111
+
112
+ 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
113
+
114
+ 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
115
+
116
+ 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
117
+
118
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
119
+
120
+ For the license of other third party components, please refer to the following URL:
121
+ https://github.com/pytorch/pytorch/blob/v2.1.1/NOTICE
122
+ https://github.com/pytorch/pytorch/tree/v2.1.1/third_party
123
+
124
+
125
+ Open Source Software Licensed under the BSD 3-Clause License:
126
+ --------------------------------------------------------------------
127
+ 1. flash_attn
128
+ Copyright (c) 2022, the respective contributors, as shown by the AUTHORS file.
129
+ All rights reserved.
130
+
131
+
132
+ A copy of the BSD 3-Clause is included in this file.
133
+
134
+
135
+
136
+ Open Source Software Licensed under the Apache License Version 2.0 and Other Licenses of the Third-Party Components therein:
137
+ The below software in this distribution is modified by Tencent ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2025 Tencent.
138
+ --------------------------------------------------------------------
139
+ 1. sglang
140
+ Copyright 2023-2024 SGLang Team
141
+
142
+
143
+ A copy of the Apache 2.0 is included in this file.
144
+
145
+ For the license of other third party components, please refer to the following URL:
146
+ https://github.com/sgl-project/sglang/tree/v0.4.7/3rdparty/amd
147
+
148
+
149
+
150
+ Open Source Software Licensed under the Apache License Version 2.0 and Other Licenses of the Third-Party Components therein:
151
+ The below software in this distribution is modified by Tencent ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2025 Tencent.
152
+ --------------------------------------------------------------------
153
+ 1. TensorRT-LLM
154
+ Copyright (c) TensorRT-LLM original author and authors
155
+
156
+
157
+ A copy of the Apache 2.0 is included in this file.
158
+
159
+ For the license of other third party components, please refer to the following URL:
160
+ https://github.com/NVIDIA/TensorRT-LLM/tree/v0.20.0/3rdparty
README.md CHANGED
@@ -1,5 +1,503 @@
1
- ---
2
- license: other
3
- license_name: tencent-hunyuan
4
- license_link: LICENSE
5
- ---
1
+
2
+ <p align="left">
3
+ <a href="README_CN.md">中文</a>&nbsp;|&nbsp;English
4
+ </p>
5
+ <br><br>
6
+
7
+ <p align="center">
8
+ <img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
9
+ </p><p></p>
10
+
11
+
12
+ <p align="center">
13
+ 🤗&nbsp;<a href="https://huggingface.co/tencent/"><b>Hugging Face</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
14
+ <img src="https://avatars.githubusercontent.com/u/109945100?s=200&v=4" width="16"/>&nbsp;<a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct"><b>ModelScope</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
15
+ <img src="https://cdn-avatars.huggingface.co/v1/production/uploads/6594d0c6c5f1cd69a48b261d/04ZNQlAfs08Bfg4B1o3XO.png" width="14"/>&nbsp;<a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
16
+ </p>
17
+
18
+ <p align="center">
19
+ 🖥️&nbsp;<a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
20
+ 🕖&nbsp;<a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
21
+ 🕹️&nbsp;<a href="https://hunyuan.tencent.com/"><b>Demo</b></a>&nbsp;&nbsp;&nbsp;&nbsp;
22
+ </p>
23
+
24
+ <p align="center">
25
+ <a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B"><b>GITHUB</b></a> |
26
+ <a href="https://cnb.cool/tencent/hunyuan/Hunyuan-7B"><b>cnb.cool</b></a> |
27
+ <a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a>
28
+ </p>
29
+
30
+
31
+ ## Model Introduction
32
+
33
+ Hunyuan is Tencent's open-source efficient large language model series, designed for versatile deployment across diverse computational environments. From edge devices to high-concurrency production systems, these models deliver optimal performance with advanced quantization support and ultra-long context capabilities.
34
+
35
+ We have released a series of Hunyuan dense models, comprising both pre-trained and instruction-tuned variants, with parameter scales of 0.5B, 1.8B, 4B, and 7B. These models adopt training strategies similar to Hunyuan-A13B and thereby inherit its robust performance characteristics. This comprehensive model family enables flexible deployment optimization, from resource-constrained edge computing with the smaller variants to high-throughput production environments with the larger models, all while maintaining strong capabilities across diverse scenarios.
36
+
37
+ ### Key Features and Advantages
38
+
39
+ - **Hybrid Reasoning Support**: Supports both fast and slow thinking modes, allowing users to flexibly choose according to their needs.
40
+ - **Ultra-Long Context Understanding**: Natively supports a 256K context window, maintaining stable performance on long-text tasks.
41
+ - **Enhanced Agent Capabilities**: Optimized for agent tasks, achieving leading results on benchmarks such as BFCL-v3, τ-Bench and C3-Bench.
42
+ - **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
43
+
44
+ ## Related News
45
+ * 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-4B-Pretrain**, **Hunyuan-7B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Instruct**, and **Hunyuan-7B-Instruct** on Hugging Face.
46
+ <br>
47
+
48
+
49
+ ## Benchmark
50
+
51
+ Note: The following benchmarks were evaluated with the TRT-LLM backend on the **base models**.
52
+
53
+ | Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
54
+ |:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
55
+ | MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
56
+ | MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
57
+ | MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
58
+ | SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
59
+ | BBH | 45.92 | 74.32 | 75.17 | 82.95 |
60
+ | GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
61
+ | GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
62
+ | MATH | 42.95 | 62.85 | 72.25 | 74.85 |
63
+ | EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
64
+ | MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
65
+ | MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
66
+ | CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
67
+ | Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
68
+ | simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
69
+
70
+
71
+ | Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
72
+ |:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
73
+ | **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
74
+ | **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
75
+ | **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
76
+ | **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
77
+ | **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
78
+ | **Agent** | BFCL v3<br> τ-Bench<br>ComplexFuncBench<br> C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
79
+ | **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
80
+
81
+
82
+ &nbsp;
83
+
84
+ ### Use with transformers
85
+ First, install transformers from the commit below; Hunyuan support has not yet been merged into the main branch.
86
+ ```SHELL
87
+ pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
88
+ ```
89
+ Our model defaults to using slow-thinking reasoning, and there are two ways to disable CoT reasoning.
90
+ 1. Pass **"enable_thinking=False"** when calling apply_chat_template.
91
+ 2. Adding **"/no_think"** before the prompt will force the model not to perform CoT reasoning. Similarly, adding **"/think"** before the prompt will force the model to perform CoT reasoning, as illustrated below.
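+ A minimal illustration of the prompt-prefix switch (the chat template is applied exactly as in the full snippet below; the prefix is ordinary prompt text):
+
+ ```python
+ # "/no_think" at the start of the user turn disables CoT for this request;
+ # "/think" forces it.
+ messages = [
+     {"role": "user", "content": "/no_think Write a short summary of the benefits of regular exercise"},
+ ]
+ ```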
92
+
93
+ The following code snippet shows how to use the transformers library to load the model and run inference. It also demonstrates how to enable and disable the reasoning mode, and how to parse the reasoning process along with the final output.
94
+
95
+ We use tencent/Hunyuan-7B-Instruct as an example.
96
+
97
+ ```python
98
+ from transformers import AutoModelForCausalLM, AutoTokenizer
99
+ import os
100
+ import re
101
+
102
+ model_name_or_path = "tencent/Hunyuan-7B-Instruct"
103
+
104
+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
105
+ model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto") # You may want to use bfloat16 and/or move to GPU here
106
+ messages = [
107
+ {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
108
+ ]
109
+ tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt",
110
+ enable_thinking=True # Toggle thinking mode (default: True)
111
+ )
112
+
113
+ outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)
114
+
115
+ output_text = tokenizer.decode(outputs[0])
116
+ print("output_text=", output_text)
117
+ think_pattern = r'<think>(.*?)</think>'
118
+ think_matches = re.findall(think_pattern, output_text, re.DOTALL)
119
+
120
+ answer_pattern = r'<answer>(.*?)</answer>'
121
+ answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
122
+
123
+ think_content = [match.strip() for match in think_matches][0]
124
+ answer_content = [match.strip() for match in answer_matches][0]
125
+ print(f"thinking_content:{think_content}\n\n")
126
+ print(f"answer_content:{answer_content}\n\n")
127
+
128
+
129
+ ```
130
+
131
+ We recommend using the following set of parameters for inference. Note that our model does not have a default system prompt.
132
+
133
+ ```json
134
+
135
+ {
136
+ "do_sample": true,
137
+ "top_k": 20,
138
+ "top_p": 0.8,
139
+ "repetition_penalty": 1.05,
140
+ "temperature": 0.7
141
+ }
142
+ ```
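+ As a minimal sketch of applying these settings with transformers (reusing `model` and `tokenized_chat` from the snippet above; `GenerationConfig` is the standard transformers container for such parameters):
+
+ ```python
+ from transformers import GenerationConfig
+
+ # The recommended sampling parameters from the JSON above.
+ gen_config = GenerationConfig(
+     do_sample=True,
+     top_k=20,
+     top_p=0.8,
+     repetition_penalty=1.05,
+     temperature=0.7,
+ )
+ outputs = model.generate(
+     tokenized_chat.to(model.device),
+     generation_config=gen_config,
+     max_new_tokens=2048,
+ )
+ ```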
143
+
144
+ &nbsp;
145
+
146
+ ### Training Data Format
147
+
148
+ If you need to fine-tune our Instruct model, we recommend processing the data into the following format, corresponding to both slow-thinking and fast-thinking scenarios.
149
+
150
+ ```python
151
+ # think_pattern
152
+ think = ""
153
+ answer = ""
154
+ think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
155
+
156
+ # fast think pattern
157
+ messages = [
158
+ {"role": "system", "content": "You are a helpful assistant."},
159
+ {"role": "user", "content": "/no_think海水为什么是咸的" },
160
+ {"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
161
+ ]
162
+
163
+ # slow think pattern
164
+ messages = [
165
+ {"role": "system", "content": "You are a helpful assistant."},
166
+ {"role": "user", "content": "1+1=" },
167
+ {"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
168
+ ]
169
+
170
+ from transformers import AutoTokenizer
171
+ tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
172
+ train_ids = tokenizer.apply_chat_template(messages)
173
+ ```
174
+
175
+ &nbsp;
176
+
177
+ ### Train with LLaMA-Factory
178
+
179
+ In this section, we introduce how to use `LLaMA-Factory` to fine-tune the `Hunyuan` model.
180
+
181
+ #### Prerequisites
182
+
183
+ Verify installation of the following dependencies:
184
+ - **LLaMA-Factory**: Follow [official installation guide](https://github.com/hiyouga/LLaMA-Factory)
185
+ - **DeepSpeed** (optional): Follow [official installation guide](https://github.com/deepspeedai/DeepSpeed#installation)
186
+ - **Transformers library**: Use the companion branch below (the Hunyuan-submitted code is pending review)
187
+ ```
188
+ pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
189
+ ```
190
+
191
+ #### Data preparation
192
+
193
+ We need to prepare a custom dataset:
194
+ 1. Organize your data in `json` format and place it in the `data` directory in `LLaMA-Factory`. The current implementation uses the `sharegpt` dataset format, which requires the following structure:
195
+ ```
196
+ [
197
+ {
198
+ "messages": [
199
+ {
200
+ "role": "system",
201
+ "content": "System prompt (optional)"
202
+ },
203
+ {
204
+ "role": "user",
205
+ "content": "Human instruction"
206
+ },
207
+ {
208
+ "role": "assistant",
209
+ "content": "Model response"
210
+ }
211
+ ]
212
+ }
213
+ ]
214
+ ```
215
+ Refer to the [Data Format](#training-data-format) section mentioned earlier for details.
216
+
217
+ 2. Define your dataset in the data/dataset_info.json file using the following format:
218
+ ```
219
+ "dataset_name": {
220
+ "file_name": "dataset.json",
221
+ "formatting": "sharegpt",
222
+ "columns": {
223
+ "messages": "messages"
224
+ },
225
+ "tags": {
226
+ "role_tag": "role",
227
+ "content_tag": "content",
228
+ "user_tag": "user",
229
+ "assistant_tag": "assistant",
230
+ "system_tag": "system"
231
+ }
232
+ }
233
+ ```
234
+
235
+ #### Training execution
236
+
237
+ 1. Copy all files from the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory in `LLaMA-Factory`.
238
+ 2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`. Adjust other configurations as needed:
239
+ ```
240
+ ### model
241
+ model_name_or_path: [!!!add the model path here!!!]
242
+
243
+ ### dataset
244
+ dataset: [!!!add the dataset name here!!!]
245
+ ```
246
+ 3. Execute training commands:
247
+ * Single-node training
248
+ Note: Set the environment variable DISABLE_VERSION_CHECK to 1 to avoid version conflicts.
249
+ ```
250
+ export DISABLE_VERSION_CHECK=1
251
+ llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
252
+ ```
253
+ * Multi-node training
254
+ Execute the following command on each node. Configure NNODES, NODE_RANK, MASTER_ADDR, and MASTER_PORT according to your environment:
255
+ ```
256
+ export DISABLE_VERSION_CHECK=1
257
+ FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
258
+ llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
259
+ ```
260
+
261
+ &nbsp;
262
+
263
+
264
+ ## Quantization Compression
265
+ We used our own [AngelSlim](https://github.com/tencent/AngelSlim) compression tool to produce FP8 and Int4 quantized models. `AngelSlim` is a toolset dedicated to creating a more user-friendly, comprehensive, and efficient model compression solution.
266
+
267
+ ### FP8 Quantization
268
+ We use FP8 static quantization: an 8-bit floating-point format whose quantization scales are pre-determined from a small amount of calibration data (no training required). Model weights and activation values are converted to FP8 to improve inference efficiency and lower the deployment threshold. You can quantize models yourself with AngelSlim, or directly download our pre-quantized open-source models ([LINK](https://huggingface.co/)).
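+ For intuition, here is a minimal sketch of per-tensor static FP8 calibration (illustrative only, not the AngelSlim implementation; `torch.float8_e4m3fn` requires a recent PyTorch build):
+
+ ```python
+ import torch
+
+ def fp8_static_quantize(weight: torch.Tensor, calib_acts: torch.Tensor):
+     FP8_MAX = 448.0  # largest finite value representable in the E4M3 FP8 format
+     # Scales are pre-determined from magnitudes observed on calibration data; no training involved.
+     w_scale = weight.abs().max() / FP8_MAX
+     a_scale = calib_acts.abs().max() / FP8_MAX  # fixed ("static") activation scale
+     w_fp8 = (weight / w_scale).to(torch.float8_e4m3fn)
+     return w_fp8, w_scale, a_scale
+ ```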
269
+
270
+ ### Int4 Quantization
271
+ We use the GPTQ and AWQ algorithms to achieve W4A16 quantization.
272
+
273
+ GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights; the weights are adjusted through an optimization process that approximates the inverse Hessian. The process eliminates the need to retrain the model and requires only a small amount of calibration data to quantize the weights, improving inference efficiency and lowering the deployment threshold.
274
+ AWQ uses a small amount of calibration data (no training required) to statistically estimate the magnitudes of the activation values. For each weight channel, a scaling coefficient s is computed to expand the numerical range of important weights, allowing more information to be retained during quantization.
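+ A minimal sketch of this per-channel scaling idea (illustrative only, not the AngelSlim implementation):
+
+ ```python
+ import torch
+
+ def awq_channel_scales(weight: torch.Tensor, calib_acts: torch.Tensor, alpha: float = 0.5):
+     # weight: [out_features, in_features]; calib_acts: [n_tokens, in_features]
+     act_mag = calib_acts.abs().mean(dim=0)   # per-input-channel activation magnitude
+     s = act_mag.clamp(min=1e-5) ** alpha     # scaling coefficient s per channel
+     scaled_weight = weight * s               # quantize (W * s); at runtime compute (x / s) @ W_q.T
+     return scaled_weight, s
+ ```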
275
+
276
+ You can quantize models with [AngelSlim](https://github.com/tencent/AngelSlim), or directly download our pre-quantized open-source models ([LINK](https://huggingface.co/)).
277
+
278
+
279
+
280
+ #### Quantization Benchmark
281
+ This subsection reports benchmark metrics for the quantized Hunyuan models.
282
+
283
+ | Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
284
+ |:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
285
+ | DROP | BF16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
286
+ | GPQA-Diamond | BF16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
287
+ | OlympiadBench | BF16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>71.1<br>71.2 | 76.5<br>76.6<br>76.2<br>76.4 |
288
+ | AIME 2024 | BF16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
289
+
290
+
291
+ ## Deployment
292
+
293
+ For deployment, you can use frameworks such as **TensorRT-LLM**, **vLLM**, or **SGLang** to serve the model and create an OpenAI-compatible API endpoint.
294
+
295
+ Pre-built Docker images: https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
296
+
297
+
298
+ ### TensorRT-LLM
299
+
300
+ #### Docker Image
301
+
302
+ We provide a pre-built Docker image based on the latest version of TensorRT-LLM.
303
+
304
+ We use tencent/Hunyuan-7B-Instruct as an example.
305
+ - To get started:
306
+
307
+ https://hub.docker.com/r/hunyuaninfer/hunyuan-7B/tags
308
+
309
+ ```
310
+ docker pull hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
311
+ ```
312
+ ```
313
+ docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-7B:hunyuan-moe-7B-trtllm
314
+ ```
315
+
316
+ - Prepare Configuration file:
317
+
318
+ ```
319
+ cat >/path/to/extra-llm-api-config.yml <<EOF
320
+ use_cuda_graph: true
321
+ cuda_graph_padding_enabled: true
322
+ cuda_graph_batch_sizes:
323
+ - 1
324
+ - 2
325
+ - 4
326
+ - 8
327
+ - 16
328
+ - 32
329
+ print_iter_log: true
330
+ EOF
331
+ ```
332
+
333
+
334
+ - Start the API server:
335
+
336
+
337
+ ```
338
+ trtllm-serve \
339
+ /path/to/HunYuan-moe-7B \
340
+ --host localhost \
341
+ --port 8000 \
342
+ --backend pytorch \
343
+ --max_batch_size 32 \
344
+ --max_num_tokens 16384 \
345
+ --tp_size 2 \
346
+ --kv_cache_free_gpu_memory_fraction 0.6 \
347
+ --trust_remote_code \
348
+ --extra_llm_api_options /path/to/extra-llm-api-config.yml
349
+ ```
350
+
351
+
352
+ ### vLLM
353
+
354
+ #### Start
355
+ Please use vLLM version v0.10.0 or higher for inference.
356
+
357
+ We use tencent/Hunyuan-7B-Instruct as an example.
358
+ - Download Model file:
359
+ - Hugging Face: vLLM will download the model automatically.
360
+ - ModelScope: `modelscope download --model Tencent-Hunyuan/Hunyuan-7B-Instruct`
361
+
362
+ - model download by huggingface:
363
+ ```shell
364
+ export MODEL_PATH=tencent/Hunyuan-7B-Instruct
365
+ ```
366
+
367
+ - model downloaded by modelscope:
368
+ ```shell
369
+ export MODEL_PATH=/root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-7B-Instruct/
370
+ ```
371
+
372
+ - Start the API server:
373
+
374
+ ```shell
375
+ python3 -m vllm.entrypoints.openai.api_server \
376
+ --host 0.0.0.0 \
377
+ --port 8000 \
378
+ --trust-remote-code \
379
+ --model ${MODEL_PATH} \
380
+ --tensor-parallel-size 1 \
381
+ --dtype bfloat16 \
382
+ --quantization experts_int8 \
383
+ --served-model-name hunyuan \
384
+ 2>&1 | tee log_server.txt
385
+ ```
386
+ - After the service script runs successfully, send a request:
387
+ ```shell
388
+ curl http://0.0.0.0:8000/v1/chat/completions -H 'Content-Type: application/json' -d '{
389
+ "model": "hunyuan",
390
+ "messages": [
391
+ {
392
+ "role": "system",
393
+ "content": [{"type": "text", "text": "You are a helpful assistant."}]
394
+ },
395
+ {
396
+ "role": "user",
397
+ "content": [{"type": "text", "text": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"}]
398
+ }
399
+ ],
400
+ "max_tokens": 2048,
401
+ "temperature": 0.7,
402
+ "top_p": 0.6,
403
+ "top_k": 20,
404
+ "repetition_penalty": 1.05,
405
+ "stop_token_ids": [127960]
406
+ }'
407
+ ```
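+ Equivalently, a short Python sketch against the same OpenAI-compatible endpoint (assumes the `openai` Python package; no real API key is needed for a local server):
+
+ ```python
+ from openai import OpenAI
+
+ client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")
+ resp = client.chat.completions.create(
+     model="hunyuan",
+     messages=[
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": "请按面积大小对四大洋进行排序,并给出面积最小的洋是哪一个?直接输出结果。"},
+     ],
+     max_tokens=2048,
+     temperature=0.7,
+     top_p=0.6,
+ )
+ print(resp.choices[0].message.content)
+ ```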
408
+ #### Quantized model deployment
409
+ This section describes how to deploy quantized models with vLLM.
410
+
411
+ The server defaults to BF16.
412
+
413
+ ##### Int8 quantized model deployment
414
+ Deploying the Int8-weight-only version of the HunYuan-7B model only requires setting the environment variables described above.
415
+
416
+ Next we start the Int8 service. Run:
417
+ ```shell
418
+ python3 -m vllm.entrypoints.openai.api_server \
419
+ --host 0.0.0.0 \
420
+ --port 8000 \
421
+ --trust-remote-code \
422
+ --model ${MODEL_PATH} \
423
+ --tensor-parallel-size 1 \
424
+ --dtype bfloat16 \
425
+ --served-model-name hunyuan \
426
+ --quantization experts_int8 \
427
+ 2>&1 | tee log_server.txt
428
+ ```
429
+
430
+
431
+ ##### Int4 quantized model deployment
432
+ Deploying the Int4-weight-only (GPTQ) version of the HunYuan-7B model only requires setting the environment variable below.
433
+ ```shell
434
+ export MODEL_PATH=PATH_TO_INT4_MODEL
435
+ ```
436
+ Next we start the Int4 service. Run:
437
+ ```shell
438
+ python3 -m vllm.entrypoints.openai.api_server \
439
+ --host 0.0.0.0 \
440
+ --port 8000 \
441
+ --trust-remote-code \
442
+ --model ${MODEL_PATH} \
443
+ --tensor-parallel-size 1 \
444
+ --dtype bfloat16 \
445
+ --served-model-name hunyuan \
446
+ --quantization gptq_marlin \
447
+ 2>&1 | tee log_server.txt
448
+ ```
449
+
450
+ ##### FP8 quantized model deployment
451
+ Deploying the W8A8C8 version of the HunYuan-7B model only requires setting the environment variables.
452
+
453
+
454
+ Next we start the FP8 service. Run:
455
+ ```shell
456
+ python3 -m vllm.entrypoints.openai.api_server \
457
+ --host 0.0.0.0 \
458
+ --port 8000 \
459
+ --trust-remote-code \
460
+ --model ${MODEL_PATH} \
461
+ --tensor-parallel-size 1 \
462
+ --dtype bfloat16 \
463
+ --served-model-name hunyuan \
464
+ --kv-cache-dtype fp8 \
465
+ 2>&1 | tee log_server.txt
466
+ ```
467
+
468
+
469
+
470
+
471
+ ### SGLang
472
+
473
+ #### Docker Image
474
+
475
+ We also provide a pre-built Docker image based on the latest version of SGLang.
476
+
477
+ We use tencent/Hunyuan-7B-Instruct as an example.
478
+
479
+ To get started:
480
+
481
+ - Pull the Docker image
482
+
483
+ ```
484
+ docker pull lmsysorg/sglang:latest
485
+ ```
486
+
487
+ - Start the API server:
488
+
489
+ ```
490
+ docker run --entrypoint="python3" --gpus all \
491
+ --shm-size 32g \
492
+ -p 30000:30000 \
493
+ --ulimit nproc=10000 \
494
+ --privileged \
495
+ --ipc=host \
496
+ lmsysorg/sglang:latest \
497
+ -m sglang.launch_server --model-path tencent/Hunyuan-7B-Instruct --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
498
+ ```
499
+
500
+
501
+ ## Contact Us
502
+
503
+ If you would like to leave a message for our R&D and product teams, feel free to contact our open-source team. You can also reach us via email ([email protected]).
README_CN.md ADDED
@@ -0,0 +1,748 @@
+ <p align="left">
+ <a href="README.md">English</a> | Chinese&nbsp;
+ </p>
+ <br><br>
+
+ <p align="center">
+ <img src="https://dscache.tencent-cloud.cn/upload/uploader/hunyuan-64b418fd052c033b228e04bc77bbc4b54fd7f5bc.png" width="400"/> <br>
+ </p><p></p>
+
+
+ <p align="center">
+ 🤗&nbsp;<a href="https://huggingface.co/tencent/"><b>Hugging Face</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
+ <img src="https://avatars.githubusercontent.com/u/109945100?s=200&v=4" width="16"/>&nbsp;<a href="https://modelscope.cn/models/Tencent-Hunyuan/"><b>ModelScope</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
+ <img src="https://cdn-avatars.huggingface.co/v1/production/uploads/6594d0c6c5f1cd69a48b261d/04ZNQlAfs08Bfg4B1o3XO.png" width="14"/>&nbsp;<a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
+ </p>
+
+ <p align="center">
+ 🖥️&nbsp;<a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
+ 🕖&nbsp;<a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a>&nbsp;&nbsp;|&nbsp;&nbsp;
+ 🕹️&nbsp;<a href="https://hunyuan.tencent.com/"><b>Demo</b></a>&nbsp;&nbsp;&nbsp;&nbsp;
+ </p>
+
+ <p align="center">
+ <a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B"><b>GITHUB</b></a> |
+ <a href="https://cnb.cool/tencent/hunyuan/Hunyuan-7B"><b>cnb.cool</b></a> |
+ <a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a>
+ </p>
+
+
+
+ ## Model Introduction
+
+ Hunyuan is Tencent's open-source series of efficient large language models, designed for flexible deployment across diverse computing environments. From edge devices to high-concurrency production systems, the models deliver strong performance in every scenario thanks to advanced quantization support and ultra-long-context capabilities.
+
+ We have released a family of Hunyuan dense models, in both pre-trained and instruction-tuned variants, at 0.5B, 1.8B, 4B, and 7B parameters. These models were trained with strategies similar to Hunyuan-A13B and therefore inherit its strong performance characteristics. The family supports flexible deployment optimization: the smaller models fit resource-constrained edge computing scenarios, while the larger, higher-performing models support high-concurrency, low-latency production environments with complex reasoning, maintaining strong capabilities throughout.
+
+
+ ### Key Features and Advantages
+ - **Hybrid reasoning support**: both fast-thinking and slow-thinking modes are supported, and users can switch between them freely
+ - **Ultra-long context understanding**: a 256K context window is supported natively, with stable performance on long-text tasks
+ - **Enhanced agent capabilities**: optimized agent abilities, with leading results on agent benchmarks such as BFCL-v3, τ-Bench, and C3-Bench
+ - **Efficient inference**: grouped-query attention (GQA) and support for multiple quantization formats enable efficient inference
+
+ ## News
+ <br>
+
+ * 2025.7.30 We open-sourced **Hunyuan-0.5B-Pretrain**, **Hunyuan-1.8B-Pretrain**, **Hunyuan-4B-Pretrain**, **Hunyuan-7B-Pretrain**, **Hunyuan-0.5B-Instruct**, **Hunyuan-1.8B-Instruct**, **Hunyuan-4B-Instruct**, and **Hunyuan-7B-Instruct** on Hugging Face.
+
+ ## Benchmarks
+ | Model | Hunyuan-0.5B-Pretrain | Hunyuan-1.8B-Pretrain | Hunyuan-4B-Pretrain | Hunyuan-7B-Pretrain|
+ |:------------------:|:---------------:|:--------------:|:-------------:|:---------------:|
+ | MMLU | 54.02 | 64.62 | 74.01 | 79.82 |
+ | MMLU-Redux | 54.72 | 64.42 | 73.53 | 79 |
+ | MMLU-Pro | 31.15 | 38.65 | 51.91 | 57.79 |
+ | SuperGPQA | 17.23 | 24.98 | 27.28 | 30.47 |
+ | BBH | 45.92 | 74.32 | 75.17 | 82.95 |
+ | GPQA | 27.76 | 35.81 | 43.52 | 44.07 |
+ | GSM8K | 55.64 | 77.26 | 87.49 | 88.25 |
+ | MATH | 42.95 | 62.85 | 72.25 | 74.85 |
+ | EvalPlus | 39.71 | 60.67 | 67.76 | 66.96 |
+ | MultiPL-E | 21.83 | 45.92 | 59.87 | 60.41 |
+ | MBPP | 43.38 | 66.14 | 76.46 | 76.19 |
+ | CRUX-O | 30.75 | 36.88 | 56.5 | 60.75 |
+ | Chinese SimpleQA | 12.51 | 22.31 | 30.53 | 38.86 |
+ | simpleQA (5shot) | 2.38 | 3.61 | 4.21 | 5.69 |
+
+
+ | Topic | Bench | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct|
+ |:-------------------:|:----------------------------------------------------:|:-------------:|:------------:|:-----------:|:---------------------:|
+ | **Mathematics** | AIME 2024<br>AIME 2025<br>MATH | 17.2<br>20<br>48.5 | 56.7<br>53.9<br>86 | 78.3<br>66.5<br>92.6 | 81.1<br>75.3<br>93.7 |
+ | **Science** | GPQA-Diamond<br>OlympiadBench | 23.3<br>29.6 | 47.2<br>63.4 | 61.1<br>73.1 | 60.1<br>76.5 |
+ | **Coding** | Livecodebench<br>Fullstackbench | 11.1<br>20.9 | 31.5<br>42 | 49.4<br>54.6 | 57<br>56.3 |
+ | **Reasoning** | BBH<br>DROP<br>ZebraLogic | 40.3<br>52.8<br>34.5 | 64.6<br>76.7<br>74.6 | 83<br>78.2<br>83.5 | 87.8<br>85.9<br>85.1 |
+ | **Instruction<br>Following** | IF-Eval<br>SysBench | 49.7<br>28.1 | 67.6<br>55.5 | 76.6<br>68 | 79.3<br>72.7 |
+ | **Agent** | BFCL v3<br>τ-Bench<br>ComplexFuncBench<br>C3-Bench | 49.8<br>14.4<br>13.9<br>45.3 | 58.3<br>18.2<br>22.3<br>54.6 | 67.9<br>30.1<br>26.3<br>64.3 | 70.8<br>35.3<br>29.2<br>68.5 |
+ | **Long<br>Context** | PenguinScrolls<br>longbench-v2<br>FRAMES | 53.9<br>34.7<br>41.9 | 73.1<br>33.2<br>55.6 | 83.1<br>44.1<br>79.2 | 82<br>43<br>78.6 |
+
+ &nbsp;
+
+ ## Inference with transformers
+
+ Our model uses slow thinking (CoT reasoning) by default. There are two ways to disable it.
+
+ 1. Pass **enable_thinking=False** when calling apply_chat_template.
+ 2. Prepending **/no_think** to the prompt forces the model to skip CoT reasoning. Likewise, prepending **/think** forces the model to perform CoT reasoning.
+
+ The following code snippet shows how to load and use the model with the transformers library. It also demonstrates how to disable the reasoning mode and how to parse the "thinking" and "final answer" parts out of the output.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import os
+ import re
+
+ model_name_or_path = os.environ['MODEL_PATH']
+ # model_name_or_path = "tencent/Hunyuan-7B-Instruct"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True)  # You may want to use bfloat16 and/or move to GPU here
+ messages = [
+     {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
+ ]
+ tokenized_chat = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+     enable_thinking=True
+ )
+
+ model_inputs = tokenizer([tokenized_chat], return_tensors="pt").to(model.device)
+ model_inputs.pop("token_type_ids", None)
+ outputs = model.generate(**model_inputs, max_new_tokens=4096)
+ output_text = tokenizer.decode(outputs[0])
+
+ # Extract the chain-of-thought and the final answer from the tagged output.
+ think_pattern = r'<think>(.*?)</think>'
+ think_matches = re.findall(think_pattern, output_text, re.DOTALL)
+
+ answer_pattern = r'<answer>(.*?)</answer>'
+ answer_matches = re.findall(answer_pattern, output_text, re.DOTALL)
+
+ think_content = [match.strip() for match in think_matches][0]
+ answer_content = [match.strip() for match in answer_matches][0]
+ print(f"thinking_content:{think_content}\n\n")
+ print(f"answer_content:{answer_content}\n\n")
+ ```
+
+
+ We recommend the following set of parameters for inference. Note that our model has no default system_prompt.
+
+ ```json
+ {
+     "do_sample": true,
+     "top_k": 20,
+     "top_p": 0.8,
+     "repetition_penalty": 1.05,
+     "temperature": 0.7
+ }
+ ```
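+
+ As a minimal sketch (reusing `model` and `model_inputs` from the snippet above), these parameters map directly onto `generate()` keyword arguments:
+
+ ```python
+ outputs = model.generate(
+     **model_inputs,
+     max_new_tokens=4096,
+     do_sample=True,
+     top_k=20,
+     top_p=0.8,
+     repetition_penalty=1.05,
+     temperature=0.7,
+ )
+ ```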
+
+ &nbsp;
+
+ ## Training Data Format
+
+ If you need to fine-tune our Instruct model, we recommend processing your data into the following formats, corresponding to the slow-thinking and fast-thinking scenarios respectively.
+
+ ```python
+ # think_pattern
+ think = ""
+ answer = ""
+ think_pattern = f"<think>\n{think}\n</think>\n<answer>\n{answer}\n</answer>"
+
+ # fast think pattern
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "/no_think海水为什么是咸的" },
+     {"role": "assistant", "content": "<think>\n\n</think>\n<answer>\n海水是咸的主要是因为其中含有许多溶解在水中的盐类和矿物质。这些盐类和矿物质来自于地球表面的岩石和土壤中的化学物质,随着时间的推移,它们被带到了海洋中。当海水蒸发时,水分蒸发掉了,但盐类和矿物质仍然留在水中,导致海水变得更加咸味。因此,海水的咸度是由其中的盐类和矿物质的含量决定的。\n</answer>"}
+ ]
+
+ # slow think pattern
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "1+1=" },
+     {"role": "assistant", "content": "<think>\n嗯,用户问的是1加1等于多少。首先,我需要确认这是一个基本的算术问题。1加1在十进制的数学体系中,通常的结果是2。不过,可能需要考虑是否有其他情况,比如二进制或者其他数制,但用户没有特别说明,所以默认应该是十进制。另外,有时候可能会有脑筋急转弯的情况,比如在某些语境下1+1可能等于1(比如1滴水加1滴水还是1滴水),但通常数学问题中都是2。所以最准确的回答应该是2。</think>\n<answer>\n在十进制的基本算术运算中,1加1的结果是2。这是数学中最基础的加法运算之一,遵循自然数的加法规则。因此,1 + 1 = 2。\n</answer>"}
+ ]
+
+ from transformers import AutoTokenizer
+ tokenizer = AutoTokenizer.from_pretrained("your_tokenizer_path", trust_remote_code=True)
+ train_ids = tokenizer.apply_chat_template(messages)
+ ```
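+
+ To sanity-check the rendered training sequence (a quick check reusing `tokenizer` and `train_ids` from the snippet above; `apply_chat_template` returns token ids here):
+
+ ```python
+ print(tokenizer.decode(train_ids))
+ ```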
+
+ &nbsp;
+
+ ## Training with LLaMA-Factory
+
+ This section describes how to fine-tune the Hunyuan model with `LLaMA-Factory`.
+
+ ### Environment Setup
+
+ Before you start, make sure the following libraries are installed:
+ 1. Install [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) following its official instructions.
+ 2. Install [DeepSpeed](https://github.com/deepspeedai/DeepSpeed#installation) following its official instructions (optional).
+ 3. Install the matching transformers library. The transformers code submitted by Hunyuan is still under review, so you need the matching branch:
+ ```
+ pip install git+https://github.com/huggingface/transformers@4970b23cedaf745f963779b4eae68da281e8c6ca
+ ```
+
+ ### Data Preparation
+
+ We need to prepare a custom dataset:
+
+ 1. Organize your data in `json` format and put it in the `data` directory of `LLaMA-Factory`. The dataset currently used is in the `sharegpt` format and must follow the format below:
+ ```
+ [
+   {
+     "messages": [
+       {
+         "role": "system",
+         "content": "system prompt (optional)"
+       },
+       {
+         "role": "user",
+         "content": "human instruction"
+       },
+       {
+         "role": "assistant",
+         "content": "model response"
+       }
+     ]
+   }
+ ]
+ ```
+ See the notes on the [data format](#training-data-format) in the earlier section.
+
+ 2. Define your dataset in the `data/dataset_info.json` file, using the following format:
+ ```
+ "dataset_name": {
+   "file_name": "data.json",
+   "formatting": "sharegpt",
+   "columns": {
+     "messages": "messages"
+   },
+   "tags": {
+     "role_tag": "role",
+     "content_tag": "content",
+     "user_tag": "user",
+     "assistant_tag": "assistant",
+     "system_tag": "system"
+   }
+ }
+ ```
+
+ ### Training
+
+ 1. Copy all files under the `train/llama_factory_support/example_configs` directory to the `example/hunyuan` directory of `LLaMA-Factory`.
+ 2. Modify the model path and dataset name in the configuration file `hunyuan_full.yaml`; adjust the other settings as needed.
+ ```
+ ### model
+ model_name_or_path: [!!!add the model path here!!!]
+
+ ### dataset
+ dataset: [!!!add the data set name here!!!]
+ ```
+ 3. Run the training command.
+ * Single-node training
+   Note that the `DISABLE_VERSION_CHECK` environment variable must be set here to avoid version conflicts.
+ ```
+ export DISABLE_VERSION_CHECK=1
+ llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
+ ```
+ * Multi-node training
+   Run the following command on every node. Configure `NNODES`, `NODE_RANK`, `MASTER_ADDR`, and `MASTER_PORT` as required by `torchrun` for your environment.
+ ```
+ export DISABLE_VERSION_CHECK=1
+ FORCE_TORCHRUN=1 NNODES=${NNODES} NODE_RANK=${NODE_RANK} MASTER_ADDR=${MASTER_ADDR} MASTER_PORT=${MASTER_PORT} \
+ llamafactory-cli train examples/hunyuan/hunyuan_full.yaml
+ ```
+
+ &nbsp;
+
+ ## Quantization and Compression
+
+ We use the [AngelSlim](https://github.com/tencent/AngelSlim) compression tool to produce the FP8 and INT4 quantized models. `AngelSlim` is a toolkit dedicated to building a more user-friendly, comprehensive, and efficient model compression solution.
+
+ ### FP8 Quantization
+ We adopt FP8-static quantization. FP8 quantization uses an 8-bit floating-point format: quantization scales are pre-determined with a small amount of calibration data (no training required), and model weights and activations are converted to FP8, improving inference efficiency and lowering the deployment barrier. You can quantize with AngelSlim, or directly download our pre-quantized open-source models ([LINK](https://huggingface.co/)).
+
+ ### Int4 Quantization
+ For Int4 quantization we implement W4A16 quantization with the GPTQ and AWQ algorithms.
+
+ GPTQ processes the model weights layer by layer, using a small amount of calibration data to minimize the reconstruction error of the quantized weights, adjusting them layer by layer through an optimization procedure based on an approximate inverse Hessian. The process requires no retraining and only a small amount of calibration data, improving inference efficiency and lowering the deployment barrier.
+ AWQ uses a small amount of calibration data (no training required) to compute the magnitudes of the activations. For each weight channel, a scaling factor s is computed to expand the numeric range of important weights, so that more information is preserved during quantization.
+
+ You can quantize with [AngelSlim](https://github.com/tencent/AngelSlim), or directly download our pre-quantized open-source models ([LINK](https://huggingface.co/)).
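+
+ For intuition only, here is a tiny NumPy sketch of the per-channel scaling idea behind AWQ-style W4A16 quantization. This is not the AngelSlim implementation; the function name, the `alpha` exponent, and the symmetric int4 grid are illustrative assumptions:
+
+ ```python
+ import numpy as np
+
+ def awq_style_w4_sketch(weight, act_magnitude, alpha=0.5):
+     # weight: (out_features, in_features) float weights
+     # act_magnitude: (in_features,) mean |activation| per input channel,
+     # measured on a small calibration set (no training involved).
+
+     # Channels with larger activations get a larger scale s, so their
+     # weights occupy more of the int4 grid and lose less information.
+     s = np.maximum(act_magnitude, 1e-8) ** alpha
+     w_scaled = weight * s  # at runtime the activations are divided by s
+
+     # Symmetric per-output-channel int4 quantization of the scaled weights.
+     qmax = 7
+     delta = np.maximum(np.abs(w_scaled).max(axis=1, keepdims=True), 1e-8) / qmax
+     q = np.clip(np.round(w_scaled / delta), -qmax - 1, qmax).astype(np.int8)
+
+     # Dequantized weights used at inference time (activations stay FP16: W4A16).
+     w_deq = q * delta / s
+     return q, delta, s, w_deq
+ ```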
+
+
+ #### Quantization Benchmarks
+ This subsection presents the benchmark metrics of the quantized Hunyuan models.
+
+ | Bench | Quantization | Hunyuan-0.5B-Instruct | Hunyuan-1.8B-Instruct | Hunyuan-4B-Instruct | Hunyuan-7B-Instruct |
+ |:-------------:|:---------------------------------:|:----------------------------:|:------------------------------:|:----------------------------:|:----------------------------:|
+ | DROP | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 52.8<br>51.6<br>50.9<br>48.9 | 76.7<br>75.1<br>73.0<br>71.7 | 78.2<br>78.3<br>78.1<br>78.2 | 85.9<br>86.0<br>85.7<br>85.9 |
+ | GPQA-Diamond | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 23.3<br>22.5<br>23.3<br>23.3 | 47.2<br>47.7<br>44.43<br>43.62 | 61.1<br>60.2<br>58.1<br>- | 60.1<br>60.1<br>60.0<br>60.1 |
+ | OlympiadBench | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 29.6<br>29.6<br>26.8<br>26.3 | 63.4<br>62.5<br>60.9<br>61.7 | 73.1<br>73.1<br>72.9<br>72.8 | 76.5<br>76.6<br>76.2<br>76.4 |
+ | AIME 2024 | B16<br>FP8<br>Int4GPTQ<br>Int4AWQ | 17.2<br>17.2<br>-<br>- | 56.7<br>55.17<br>-<br>- | 78.3<br>76.6<br>-<br>- | 81.1<br>80.9<br>81.0<br>80.9 |
+
+
+ &nbsp;
+
+ ## Inference and Deployment
+
+ HunyuanLLM can be deployed with TensorRT-LLM, vLLM, or SGLang. To simplify deployment, HunyuanLLM provides pre-built Docker images; see the sections below.
+
+ Images: https://hub.docker.com/r/hunyuaninfer/hunyuan-a13b/tags
+
+ ## Inference with TensorRT-LLM
+ ### Docker:
+
+ To simplify deployment, HunyuanLLM provides a pre-built Docker image (note: the image requires CUDA 12.8 or later on the host):
+
+ [hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm](https://hub.docker.com/r/hunyuaninfer/hunyuan-a13b/tags). Simply download the model files and start the Docker container with the commands below to begin running the model.
+ ```shell
+ # Pull the image
+ # Inside mainland China:
+ docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-trtllm
+ # Outside mainland China:
+ docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
+
+ # Start the container
+ docker run --privileged --user root --name hunyuanLLM_infer --rm -it --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --gpus=all hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-trtllm
+ ```
+
+ Note on Docker container privileges: starting the container in privileged mode (--privileged), as above, grants it elevated permissions and increases the risk of data leakage and cluster security incidents. We recommend avoiding privileged mode unless it is necessary. Where it must be used, perform a strict security assessment and put appropriate security monitoring and hardening measures in place.
+
+ ### BF16 Deployment
+
+ #### Step 1: Run inference
+
+ #### Option 1: Command-line inference
+
+ Below is a code snippet that uses `TensorRT-LLM` to quickly request the chat model.
+ Modify the following code in examples/pytorch/quickstart_advanced.py:
+
+ ```python
+ def setup_llm(args):
+     kv_cache_config = KvCacheConfig(
+         enable_block_reuse=not args.disable_kv_cache_reuse,
+         free_gpu_memory_fraction=args.kv_cache_fraction,
+     )
+     spec_config = None
+
+     hf_ckpt_path = "$your_hunyuan_model_path"
+     tokenizer = AutoTokenizer.from_pretrained(hf_ckpt_path, trust_remote_code=True)
+     llm = LLM(
+         tokenizer=tokenizer,
+         model=args.model_dir,
+         backend='pytorch',
+         disable_overlap_scheduler=args.disable_overlap_scheduler,
+         kv_cache_dtype=args.kv_cache_dtype,
+         kv_cache_config=kv_cache_config,
+         attn_backend=args.attention_backend,
+         use_cuda_graph=args.use_cuda_graph,
+         cuda_graph_padding_enabled=args.cuda_graph_padding_enabled,
+         cuda_graph_batch_sizes=args.cuda_graph_batch_sizes,
+         load_format=args.load_format,
+         print_iter_log=args.print_iter_log,
+         enable_iter_perf_stats=args.print_iter_log,
+         # Enable torch.compile settings only when requested.
+         torch_compile_config=TorchCompileConfig(
+             enable_fullgraph=args.use_torch_compile,
+             enable_inductor=args.use_torch_compile,
+             enable_piecewise_cuda_graph=args.use_piecewise_cuda_graph,
+         ) if args.use_torch_compile else None,
+         moe_backend=args.moe_backend,
+         enable_trtllm_sampler=args.enable_trtllm_sampler,
+         max_seq_len=args.max_seq_len,
+         max_batch_size=args.max_batch_size,
+         max_num_tokens=args.max_num_tokens,
+         enable_attention_dp=args.enable_attention_dp,
+         tensor_parallel_size=args.tp_size,
+         pipeline_parallel_size=args.pp_size,
+         moe_expert_parallel_size=args.moe_ep_size,
+         moe_tensor_parallel_size=args.moe_tp_size,
+         moe_cluster_parallel_size=args.moe_cluster_size,
+         enable_chunked_prefill=args.enable_chunked_prefill,
+         speculative_config=spec_config,
+         trust_remote_code=args.trust_remote_code,
+         gather_generation_logits=args.return_generation_logits)
+
+     sampling_params = SamplingParams(
+         end_id=127960,
+         max_tokens=args.max_tokens,
+         temperature=args.temperature,
+         top_k=args.top_k,
+         top_p=args.top_p,
+         return_context_logits=args.return_context_logits,
+         return_generation_logits=args.return_generation_logits,
+         logprobs=args.logprobs)
+     return llm, sampling_params
+
+
+ def main():
+     args = parse_arguments()
+     prompts = args.prompt if args.prompt else example_prompts
+
+     llm, sampling_params = setup_llm(args)
+     new_prompts = []
+     for prompt in prompts:
+         messages = [{"role": "user", "content": f"{prompt}"}]
+         new_prompts.append(
+             llm.tokenizer.apply_chat_template(messages,
+                                               tokenize=False,
+                                               add_generation_prompt=True))
+     prompts = new_prompts
+     outputs = llm.generate(prompts, sampling_params)
+
+     for i, output in enumerate(outputs):
+         prompt = output.prompt
+         generated_text = output.outputs[0].text
+         print(f"[{i}] Prompt: {prompt!r}, Generated text: {generated_text!r}")
+ ```
+
+ To run:
+
+ ```shell
+ python3 quickstart_advanced.py --model_dir "<path to the HunyuanLLM model>" --tp_size 4
+ ```
+
+ #### Option 2: Serving
+
+ Below we show how to deploy and query the model as a service with `TensorRT-LLM`.
+
+ Prepare the configuration file:
+
+ ```
+ cat >/path/to/extra-llm-api-config.yml <<EOF
+ use_cuda_graph: true
+ cuda_graph_padding_enabled: true
+ cuda_graph_batch_sizes:
+ - 1
+ - 2
+ - 4
+ - 8
+ - 16
+ - 32
+ print_iter_log: true
+ EOF
+ ```
+
+ Start the server:
+
+ ```shell
+ trtllm-serve \
+   /path/to/HunYuan-moe-A13B \
+   --host localhost \
+   --port 8000 \
+   --backend pytorch \
+   --max_batch_size 32 \
+   --max_num_tokens 16384 \
+   --tp_size 2 \
+   --kv_cache_free_gpu_memory_fraction 0.6 \
+   --trust_remote_code \
+   --extra_llm_api_options /path/to/extra-llm-api-config.yml
+ ```
+
+ Once the server is up, call the model through the OpenAI API:
+ ```
+ curl -X POST "http://localhost:8000/v1/chat/completions" \
+   -H "Content-Type: application/json" \
+   --data '{
+     "model": "HunYuan/HunYuan-80B-A13B",
+     "messages": [
+       {
+         "role": "user",
+         "content": "Write a short summary of the benefits of regular exercise"
+       }
+     ]
+   }'
+ ```
+
+ #### FP8/Int4 quantized model deployment:
+ Support for fp8 and int4 quantized models in TensorRT-LLM is in progress; stay tuned.
+
+
+ ## Inference with vLLM
+ ### Docker:
+
+ To simplify deployment, HunyuanLLM provides a pre-built Docker image (note: the image requires CUDA 12.8 or later on the host):
+
+ [hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm](https://hub.docker.com/r/hunyuaninfer/hunyuan-a13b/tags). Simply download the model files and start the Docker container with the commands below to begin running the model.
+ ```shell
+ # Download the model:
+ # ModelScope:
+ modelscope download --model Tencent-Hunyuan/Hunyuan-A13B-Instruct
+ # Hugging Face: vllm will download it automatically
+
+ # Pull the image
+ # Inside mainland China:
+ docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-vllm
+ # Outside mainland China:
+ docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-vllm
+
+ # Serve using the model from Hugging Face
+ docker run --privileged --user root --net=host --ipc=host \
+         -v ~/.cache:/root/.cache/ \
+         --gpus=all -it --entrypoint python docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-vllm \
+         -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 8000 \
+         --tensor-parallel-size 4 --model tencent/Hunyuan-A13B-Instruct --trust-remote-code
+
+ # Serve using the model downloaded via ModelScope
+ docker run --privileged --user root --net=host --ipc=host \
+         -v ~/.cache/modelscope:/root/.cache/modelscope \
+         --gpus=all -it --entrypoint python docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-vllm \
+         -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --tensor-parallel-size 4 \
+         --port 8000 --model /root/.cache/modelscope/hub/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct/ --trust_remote_code
+ ```
+
+ Note on Docker container privileges: starting the container in privileged mode (--privileged), as above, grants it elevated permissions and increases the risk of data leakage and cluster security incidents. We recommend avoiding privileged mode unless it is necessary. Where it must be used, perform a strict security assessment and put appropriate security monitoring and hardening measures in place.
+
+
+ ### BF16 Deployment
+
+ BF16 can be deployed on 2 GPUs, each with more than 80 GB of memory; for long-text scenarios, TP4 is recommended. Follow the steps below:
+
+ Set the following environment variable before running the commands:
+
+ ```shell
+ export MODEL_PATH=PATH_TO_MODEL
+ ```
+
+ #### Step 1: Run inference
+
+ #### Option 1: Command-line inference
+
+ Below is a code snippet that uses `vLLM` to quickly request the chat model.
+
+ Note on remote code execution in vLLM: if the trust-remote-code option of the vLLM component in the code below is enabled, vLLM will load and execute code from remote model repositories, which may lead to the execution of malicious code. Unless explicitly required by your use case, we recommend keeping this option disabled to reduce potential security threats.
+
+ ```python
+ import os
+ from typing import List, Optional
+ from vllm import LLM, SamplingParams
+ from vllm.inputs import PromptType
+ from transformers import AutoTokenizer
+
+ model_path = os.environ.get('MODEL_PATH')
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+ llm = LLM(model=model_path,
+           tokenizer=model_path,
+           trust_remote_code=True,
+           dtype='bfloat16',
+           tensor_parallel_size=4,
+           gpu_memory_utilization=0.9)
+
+ sampling_params = SamplingParams(
+     temperature=0.7, top_p=0.8, max_tokens=4096, top_k=20, repetition_penalty=1.05)
+
+ messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful assistant.",
+     },
+     {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
+ ]
+
+ tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
+
+ dummy_inputs: List[PromptType] = [{
+     "prompt_token_ids": batch
+ } for batch in tokenized_chat.numpy().tolist()]
+
+ outputs = llm.generate(dummy_inputs, sampling_params)
+
+ # Print the outputs.
+ for output in outputs:
+     prompt = output.prompt
+     generated_text = output.outputs[0].text
+     print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
+ ```
+
+ #### Option 2: Serving
+
+ Below we show how to deploy and query the model as a service with `vLLM`.
+
+ On the master node, run:
+
+ ```shell
+ export VLLM_HOST_IP=${LOCAL_IP}
+ ```
+ Then start the server by running:
+ ```shell
+ cd inference
+ sh run_server.sh
+ ```
+
+ After `run_server.sh` finishes successfully, run the request script:
+ ```shell
+ sh openapi.sh
+ ```
+
+ Remember to set `${LOCAL_IP}` and `${MODEL_PATH}` in `openapi.sh` to the values used by your service.
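+
+ For reference, `run_server.sh` likely wraps a vLLM OpenAI API server invocation similar to the one in the English README (a sketch under that assumption; adjust `--tensor-parallel-size` to your GPU count):
+
+ ```shell
+ python3 -m vllm.entrypoints.openai.api_server \
+     --host 0.0.0.0 \
+     --port 8000 \
+     --trust-remote-code \
+     --model ${MODEL_PATH} \
+     --tensor-parallel-size 2 \
+     --dtype bfloat16 \
+     --served-model-name hunyuan \
+     2>&1 | tee log_server.txt
+ ```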
+
+
+ ### Quantized model deployment:
+
+ This section describes how to deploy the quantized models with vLLM.
+
+ Image: same deployment image as BF16.
+
+
+ #### Int8 quantized model deployment:
+ Deploying the Int8-weight-only version of the HunYuan-A13B model only requires setting the environment variable in `run_server_int8.sh`:
+ ```shell
+ export MODEL_PATH=PATH_TO_BF16_MODEL
+ ```
+
+ Then start the Int8 service by running:
+ ```shell
+ sh run_server_int8.sh
+ ```
+
+ After `run_server_int8.sh` finishes successfully, run the request script:
+ ```shell
+ sh openapi.sh
+ ```
+
+ #### Int4 quantized model deployment:
+ Deploying the Int4-weight-only (GPTQ) version of the HunYuan-A13B model only requires setting the environment variable in `run_server_int4.sh`:
+ ```shell
+ export MODEL_PATH=PATH_TO_INT4_MODEL
+ ```
+
+ Then start the Int4 service by running:
+ ```shell
+ sh run_server_int4.sh
+ ```
+
+ After `run_server_int4.sh` finishes successfully, run the request script:
+ ```shell
+ sh openapi.sh
+ ```
+
+ #### FP8 quantized model deployment:
+ Deploying the W8A8C8 (FP8) version of the HunYuan-A13B model only requires setting the environment variable in `run_server_fp8.sh`:
+ ```shell
+ export MODEL_PATH=PATH_TO_FP8_MODEL
+ ```
+
+ Then start the FP8 service by running:
+ ```shell
+ sh run_server_fp8.sh
+ ```
+
+ After `run_server_fp8.sh` finishes successfully, run the request script:
+ ```shell
+ sh openapi.sh
+ ```
+ ## Inference with SGLang
+
+ ### BF16 Deployment
+
+ #### Step 1: Pull the image
+
+ ```
+ # Inside mainland China:
+ docker pull docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang
+ # Outside mainland China:
+ docker pull hunyuaninfer/hunyuan-a13b:hunyuan-moe-A13B-sglang
+ ```
+
+ - Start the API server:
+
+ ```
+ docker run --gpus all \
+ --shm-size 32g \
+ -p 30000:30000 \
+ --ipc=host \
+ docker.cnb.cool/tencent/hunyuan/hunyuan-a13b:hunyuan-moe-A13B-sglang \
+ -m sglang.launch_server --model-path hunyuan/huanyuan_A13B --tp 4 --trust-remote-code --host 0.0.0.0 --port 30000
+ ```
+
+ #### Step 2: Run inference
+
+ #### Option 1: Command-line inference
+
+ Below is a code snippet that uses `sglang` to quickly request the chat model.
+
+ ```python
+ import os
+
+ import sglang as sgl
+ from transformers import AutoTokenizer
+
+ model_path = os.environ.get('MODEL_PATH')
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+ messages = [
+     {
+         "role": "system",
+         "content": "You are a helpful assistant.",
+     },
+     {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
+ ]
+ prompts = []
+ prompts.append(tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ ))
+ print(prompts)
+
+ llm = sgl.Engine(
+     model_path=model_path,
+     tp_size=4,
+     trust_remote_code=True,
+     mem_fraction_static=0.7,
+ )
+
+ sampling_params = {"temperature": 0.7, "top_p": 0.8, "top_k": 20, "max_new_tokens": 4096}
+ outputs = llm.generate(prompts, sampling_params)
+ for prompt, output in zip(prompts, outputs):
+     print(f"Prompt: {prompt}\nGenerated text: {output['text']}")
+ ```
+
+ #### Option 2: Serving
+
+ Below we show how to deploy and query the model as a service with `sglang`.
+
+ ```shell
+ model_path="<path to the HunyuanLLM model>"
+ python3 -u -m sglang.launch_server \
+     --model-path $model_path \
+     --tp 4 \
+     --trust-remote-code
+ ```
+
+ Once the server is up, run the request script:
+ ```python
+ import openai
+ client = openai.Client(
+     base_url="http://localhost:30000/v1", api_key="EMPTY")
+
+ response = client.chat.completions.create(
+     model="default",
+     messages=[
+         {"role": "user", "content": "Write a short summary of the benefits of regular exercise"},
+     ],
+     temperature=0.7,
+     max_tokens=4096,
+     extra_body={"top_p": 0.8, "top_k": 20}
+ )
+ print(response)
+ ```
+
+ #### FP8/Int4 quantized model deployment:
+ Support for fp8 and int4 quantized models in sglang is in progress; stay tuned.
+
+ ## Interactive Web Demo
+ A web demo is now available for hunyuan-A13B. Visit https://hunyuan.tencent.com/?model=hunyuan-a13b to try our model easily.
+
+
+ ## Contact Us
+ If you would like to leave a message for our R&D and product teams, you are welcome to contact our Tencent Hunyuan LLM team. You can also reach us via email ([email protected]).