manaestras committed
Commit 2998e1e · verified · 1 Parent(s): b776608

Upload ./configuration_hunyuan.py with huggingface_hub
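For context, an upload like this is typically done with the huggingface_hub client. The snippet below is a minimal, hypothetical sketch of such an upload; the repository id is a placeholder and token handling is assumed to come from the local credentials cache, neither is taken from this commit.

# Minimal sketch of uploading a single file with huggingface_hub.
# The repo_id is a placeholder assumption; substitute the actual target repository.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from the local Hugging Face credentials cache by default
api.upload_file(
    path_or_fileobj="./configuration_hunyuan.py",
    path_in_repo="configuration_hunyuan.py",
    repo_id="your-org/your-model",  # placeholder
    commit_message="Upload ./configuration_hunyuan.py with huggingface_hub",
)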

Files changed (1)
  1. configuration_hunyuan.py +319 -0
configuration_hunyuan.py ADDED
# coding=utf-8
# Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved.
""" HunYuan model configuration"""
from torch import nn
from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging
from typing import List, Union, Optional


logger = logging.get_logger(__name__)


class HunYuanConfig(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`HunYuanModel`]. It is used to instantiate an
    HunYuan model according to the specified arguments, defining the model architecture. Instantiating a configuration
    with the defaults will yield a similar configuration to that of the HunYuan-7B.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 32000):
            Vocabulary size of the HunYuan model. Defines the number of different tokens that can be represented by
            the `inputs_ids` passed when calling [`HunYuanModel`].
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations or shared MLP representations.
        moe_intermediate_size (`int` or `List`, *optional*, defaults to 11008):
            Dimension of the MLP representations in MoE. Use a list if you want a different size per layer.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
            `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
            constructed by mean-pooling all the original heads within that group. For more details, check out [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-06):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/value attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        pad_token_id (`int`, *optional*):
            Padding token id.
        bos_token_id (`int`, *optional*, defaults to 1):
            Beginning of stream token id.
        eos_token_id (`int`, *optional*, defaults to 2):
            End of stream token id.
        pretraining_tp (`int`, *optional*, defaults to 1):
            Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
            document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value
            is necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
            issue](https://github.com/pytorch/pytorch/issues/76232).
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie the input and output word embeddings.
        rope_theta (`float`, *optional*, defaults to 10000.0):
            The base period of the RoPE embeddings.
        rope_scaling (`Dict`, *optional*):
            Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
            strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
            is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
            `max_position_embeddings` to the expected new maximum. See the following thread for more information on
            how these scaling strategies behave:
            https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is
            an experimental feature, subject to breaking API changes in future versions.
        attention_bias (`bool`, *optional*, defaults to `False`):
            Whether to use a bias in the query, key, value and output projection layers during self-attention.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.
        use_qk_norm (`bool`, *optional*, defaults to `False`):
            Whether to apply normalization to the query and key states in attention.
        use_cla (`bool`, *optional*, defaults to `False`):
            Whether to use cross-layer attention (CLA).
        cla_share_factor (`int`, *optional*, defaults to 1):
            The sharing factor for CLA.
        num_experts (`int` or `List`, *optional*, defaults to 1):
            The number of experts for MoE. If it is a list, it will be used as the number of experts for each layer.
        num_shared_expert (`int` or `List`, *optional*, defaults to 1):
            The number of shared experts for MoE. If it is a list, it will be used as the number of shared experts
            for each layer.
        moe_topk (`int` or `List`, *optional*, defaults to 1):
            The top-k value for MoE routing. If it is a list, it will be used as the top-k value for each layer.
        capacity_factor (`float` or `List`, *optional*, defaults to 1.0):
            Not used. The capacity factor for MoE. If it is a list, it will be used as the capacity factor for each
            layer.
        moe_layer_num_skipped (`int`, *optional*, defaults to 0):
            The first `moe_layer_num_skipped` layers do not use MoE.
    """

    model_type = "hunyuan"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=290943,
        org_vocab_size=290943,
        hidden_size=4096,
        intermediate_size: int = 11008,
        moe_intermediate_size: Optional[Union[int, List]] = None,
        num_hidden_layers=32,
        num_attention_heads=32,
        num_key_value_heads=None,
        attention_head_dim=None,
        hidden_act="silu",
        max_position_embeddings=2048,
        initializer_range=0.02,
        rms_norm_eps=1e-5,
        use_cache=True,
        pad_token_id=0,
        bos_token_id=1,
        eos_token_id=2,
        eod_token_id=3,
        sep_token_id=4,
        im_start_id=5,
        im_end_id=6,
        text_start_id=7,
        text_end_id=8,
        image_token_id=9,
        video_start_id=10,
        video_end_id=11,
        im_newline_id=12,
        mask_init_id=13,
        pretraining_tp=1,
        tie_word_embeddings=False,
        rope_theta=10000.0,
        rope_scaling=None,
        attention_bias=False,
        mlp_bias=False,
        attention_dropout=0.0,
        use_qk_norm=False,
        use_rotary_pos_emb=True,
        use_cla=False,
        cla_share_factor=1,
        norm_type="hf_rms",
        num_experts: Union[int, List] = 1,
        use_mixed_mlp_moe=False,
        num_shared_expert: Union[int, List] = 1,
        moe_topk: Union[int, List] = 1,
        # capacity_factor: Union[int, List] = 1.0,
        moe_drop_tokens=False,
        moe_random_routing_dropped_token=False,
        use_mla=False,
        kv_lora_rank=512,
        q_lora_rank=1536,
        qk_rope_head_dim=64,
        v_head_dim=128,
        qk_nope_head_dim=128,
        moe_layer_num_skipped=0,
        norm_topk_prob=True,
        routed_scaling_factor=1.0,
        group_limited_greedy=False,
        n_group=None,
        topk_group=None,
        vit_path=None,
        num_media_embeds=257,
        vit_type="AnyResVit",
        vit_input_resolution=224,
        vit_token=64,
        vit_patch=1,
        vit_mapping_type="simple_conv_mlp",
        vit_norm_type="fused",
        vit_used_rms_norm=True,
        vit_remove_prenorm=True,
        vit_add_patchemb_bias=True,
        anyres_vit_max_image_size=2048,
        anyres_pooling_size=2,
        anyres_vit_two_views=False,
        skip_cls_token=False,
        position_embedding_xdrope=False,
        xdrope_section=None,
        add_classification_head=False,
        class_num=0,
        pool_type="last",
        pad_id=-1,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.org_vocab_size = org_vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.moe_intermediate_size = moe_intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.num_experts = num_experts
        self.use_mixed_mlp_moe = use_mixed_mlp_moe
        self.num_shared_expert = num_shared_expert
        self.moe_topk = moe_topk
        # self.capacity_factor = capacity_factor
        self.moe_drop_tokens = moe_drop_tokens
        self.moe_random_routing_dropped_token = moe_random_routing_dropped_token

        if attention_head_dim is not None:
            self.attention_head_dim = attention_head_dim
        else:
            self.attention_head_dim = self.hidden_size // num_attention_heads

        # for backward compatibility
        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.pretraining_tp = pretraining_tp
        self.use_cache = use_cache
        self.rope_theta = rope_theta
        self.rope_scaling = rope_scaling
        # self._rope_scaling_validation()  # TODO: Need validation?
        self.attention_bias = attention_bias
        self.mlp_bias = mlp_bias
        self.attention_dropout = attention_dropout
        self.use_qk_norm = use_qk_norm
        self.use_rotary_pos_emb = use_rotary_pos_emb
        self.use_cla = use_cla
        self.cla_share_factor = cla_share_factor
        self.norm_type = norm_type
        # MLA args
        self.use_mla = use_mla
        self.kv_lora_rank = kv_lora_rank
        self.q_lora_rank = q_lora_rank
        self.qk_rope_head_dim = qk_rope_head_dim
        self.qk_nope_head_dim = qk_nope_head_dim
        self.v_head_dim = v_head_dim

        # DeepSeek related args
        self.moe_layer_num_skipped = moe_layer_num_skipped
        self.norm_topk_prob = norm_topk_prob
        self.routed_scaling_factor = routed_scaling_factor
        self.group_limited_greedy = group_limited_greedy
        self.n_group = n_group
        self.topk_group = topk_group
        self.add_classification_head = add_classification_head
        self.class_num = class_num
        self.pool_type = pool_type
        self.pad_id = pad_id

        if self.class_num is not None:
            self.dense_list = [self.hidden_size, self.class_num]

        # Vit args
        self.vit_path = vit_path
        self.num_media_embeds = num_media_embeds
        self.vit_type = vit_type
        self.vit_input_resolution = vit_input_resolution
        self.vit_token = vit_token
        self.vit_patch = vit_patch
        self.vit_mapping_type = vit_mapping_type
        self.vit_norm_type = vit_norm_type
        self.vit_used_rms_norm = vit_used_rms_norm
        self.vit_remove_prenorm = vit_remove_prenorm
        self.vit_add_patchemb_bias = vit_add_patchemb_bias
        self.anyres_vit_max_image_size = anyres_vit_max_image_size
        self.anyres_pooling_size = anyres_pooling_size
        self.anyres_vit_two_views = anyres_vit_two_views
        self.skip_cls_token = skip_cls_token
        self.position_embedding_xdrope = position_embedding_xdrope
        self.xdrope_section = xdrope_section

        # token id
        self.eod_token_id = eod_token_id
        self.im_start_id = im_start_id
        self.im_end_id = im_end_id
        self.text_start_id = text_start_id
        self.text_end_id = text_end_id
        self.image_token_id = image_token_id
        self.video_start_id = video_start_id
        self.video_end_id = video_end_id
        self.im_newline_id = im_newline_id
        self.mask_init_id = mask_init_id

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            sep_token_id=sep_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )

    def _rope_scaling_validation(self):
        """
        Validate the `rope_scaling` configuration.
        """
        if self.rope_scaling is None:
            return

        if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
            raise ValueError(
                "`rope_scaling` must be a dictionary with two fields, `type` and `factor` or `type` and `alpha`, "
                f"got {self.rope_scaling}"
            )
        rope_scaling_type = self.rope_scaling.get("type", None)
        rope_scaling_factor = self.rope_scaling.get("factor", None)
        rope_scaling_alpha = self.rope_scaling.get("alpha", None)
        if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
            raise ValueError(
                f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
            )
        if rope_scaling_factor is None and rope_scaling_alpha is None:
            raise ValueError("`rope_scaling` must have either a `factor` or an `alpha` field, got neither")
        if rope_scaling_factor is not None:
            if not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
                raise ValueError(f"`rope_scaling`'s factor field must be a float > 1.0, got {rope_scaling_factor}")
        if rope_scaling_alpha is not None:
            if not isinstance(rope_scaling_alpha, float) or rope_scaling_alpha <= 1.0:
                raise ValueError(f"`rope_scaling`'s alpha field must be a float > 1.0, got {rope_scaling_alpha}")
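
As a usage note, the configuration can be instantiated directly and then saved or reloaded like any `PretrainedConfig`. The values in the sketch below are illustrative assumptions chosen to exercise the GQA, MoE and `rope_scaling` options described in the docstring; they are not the settings of any released HunYuan checkpoint, and the import assumes configuration_hunyuan.py is on the Python path.

# Illustrative sketch only: hyperparameters are made up for demonstration.
from configuration_hunyuan import HunYuanConfig

config = HunYuanConfig(
    num_attention_heads=32,
    num_key_value_heads=8,                 # fewer KV heads than attention heads -> grouped-query attention
    num_experts=16,                        # an int applies to every MoE layer; a list sets it per layer
    num_shared_expert=1,
    moe_topk=2,
    moe_layer_num_skipped=2,               # the first two layers stay dense
    rope_scaling={"type": "dynamic", "factor": 4.0},  # format checked by _rope_scaling_validation if enabled
)

config.save_pretrained("./hunyuan-config")           # writes config.json
reloaded = HunYuanConfig.from_pretrained("./hunyuan-config")
assert reloaded.moe_topk == 2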