dong.hyun committed
Commit 42c6bee · 1 Parent(s): 8186365

HyperCLOVAX-Seed-Vision-3B
LICENSE CHANGED
@@ -0,0 +1,62 @@
1
+ HyperCLOVA X SEED Model License Agreement
2
+
3
+ Model Release Date: April 24, 2025
4
+
5
+ This HyperCLOVA X SEED Model License Agreement (the “Agreement”) is a legal agreement between you and NAVER Corporation and NAVER Cloud Corporation (“NAVER”) and governs your use of the Models that NAVER provides to You under this Agreement.
6
+
7
+ NAVER Corp., as the holder of the intellectual property of the Model, and its affiliate, NAVER Cloud Corp., as the exclusive business operator of HyperCLOVA X, enter into this Agreement with you. NAVER and you are each a “party” and collectively the “parties.”
8
+
9
+ By using, reproducing, modifying, distributing, performing or displaying any portion or element of the Model or Derivative Model, or otherwise accepting the terms of this Agreement, you agree to be bound by this Agreement. You represent to us that you are lawfully able to enter into contracts, and if you are entering into this Agreement for an entity, that you have legal authority to bind that entity.
10
+
11
+ 1. Definitions.
12
+
13
+ 1.1. "Affiliate” means any entity directly or indirectly controlling, controlled by or under common control with either party, where “control” means the possession, directly or indirectly, of the power to independently direct or cause the direction of the management and policies of an entity, whether through ownership of more than fifty percent (50%) of the stock or other equity interests entitled to vote for representation on its board of directors, or body performing similar functions, by contract or otherwise.
14
+
15
+ 1.2. “Derivative Model” means all (i) modifications to the Model, (ii) works based on the Model, or (iii) any other machine learning model which is created by transfer of patterns of the weights, parameters, operations, or Output of the Model, to that model in order to cause that model to perform similarly to the Model, including distillation methods that use intermediate data representations or methods based on the generation of synthetic data Outputs by the Model for training that Model. For clarity, Outputs are not deemed Derivative Model.
16
+
17
+ 1.3. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
18
+
19
+ 1.4. “Model” means the foundational large language models and software and algorithms, including machine-learning model code and trained model weights distributed by NAVER.
20
+
21
+ 1.5. “Output” means the information content output of the Model or a Derivative Model that results from operating or otherwise using the Model or Derivative Models.
22
+
23
+ 2. Conditions for Use, License Grant and Restrictions
24
+
25
+ 2.1. Conditions for Use. The Model and any Derivative Model are subject to the terms of this Agreement, which governs your use. If You institute copyright or patent litigation against any entity (including a crossclaim or counterclaim in a lawsuit) alleging that the Model or a Derivative Model constitutes direct or contributory copyright or patent infringement, then any license granted to you under this Agreement for that Model or Derivative Model will terminate as of the date such litigation is filed. NAVER may update this Agreement to comply with legal and regulatory requirements at any time and You agree to either comply with any updated license or cease your copying, use, and distribution of the Model and any Derivative Model.
26
+
27
+ 2.2. License Grant. Subject to the terms and conditions of this Agreement, NAVER hereby grants to you a non-exclusive, worldwide, non-transferable, revocable and royalty-free limited license under NAVER’s intellectual property or other rights owned by NAVER embodied in the Model to access, download, install, copy, use, reproduce, distribute, create derivative works of, and make modifications to the Model.
28
+
29
+ 2.3. Prohibited Use Policy. NAVER is committed to safety, trust and transparency in AI development. NAVER encourages You to (i) ensure that the product or service you develop, use, offer as a service or distribute meets the legal and ethical requirements of the relevant industry or use case, (ii) take reasonable measures to address unintended bias and to mitigate harm to others, including underrepresented or vulnerable groups, and (iii) inform users of the nature and limitations of the product or service. NAVER expressly prohibits the use of its products or services for any purpose in violation of applicable law and regulation, including but not limited to (a) illegal surveillance, (b) illegal collection or processing of biometric information without the consent of the subject where required under applicable law, or (c) illegal harassment, abuse, threatening or bullying of individuals or groups of individuals or intentionally misleading or deceiving others.
30
+
31
+ 3. Redistribution.
32
+
33
+ 3.1. You may reproduce, distribute or make available the Model or Derivative Models thereof, or a product or service (including another AI model) that contains any of them, if you meet all of the following conditions: you must (i) include the Prohibited Use Policy referenced in Section 2.3. as an enforceable provision in any agreement (e.g., license agreement, terms of use, etc.) governing the use and/or distribution of the Model or Derivative Model, and you must provide notice to subsequent users to whom you distribute the Model or Derivative Models that they are subject to the use restrictions in Section 2.3., (ii) provide all third party recipients of the Model or Derivative Models a copy of this Agreement, (iii) cause any modified files to carry prominent notices stating that you modified the files; (iv) include the following attribution notice within a “Notice” text file distributed as part of such copies: “HyperCLOVA X SEED Model is licensed under the HyperCLOVA X SEED Model License Agreement, Copyright © NAVER Corp. All Rights Reserved.”, and (v) prominently display “Powered by HyperCLOVA X” on a related website, user interface, blogpost, about page, or product documentation. If you use the Model or any Outputs of the Model to create, train, fine-tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “HyperCLOVA X” at the beginning of any such AI model name.
34
+ 3.2. You may add your own copyright statement to your modifications and, except as set forth in this Section, may provide additional or different license terms and conditions for use, reproduction, or distribution of your modifications, or for any such Derivative Models as a whole, provided your use, reproduction, and distribution of the Model or Derivative Models otherwise comply with the terms and conditions stated in this Agreement. Any additional or different terms and conditions you impose must not conflict with the terms of this Agreement.
35
+
36
+ 4. Additional Commercial Terms. If (i) as of the Model Release Date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s Affiliates, is greater than 10 million monthly active users in the preceding calendar month, or (ii) the Licensee or its Affiliate distributes or makes available any product or service, which is substantially similar to or directly competes with any product and service provided by NAVER, then the Licensee must request a license from NAVER. Such license may be granted by NAVER at its sole discretion, and the Licensee is not authorized to exercise any rights under this Agreement unless and until NAVER expressly grants you such rights.
37
+
38
+ 5. Generated Output. NAVER claims no rights in Outputs you generate using the Model. You and your use are solely responsible for Outputs and their subsequent uses.
39
+
40
+ 6. DISCLAIMER OF WARRANTY. UNLESS REQUIRED BY APPLICABLE LAW, THE MODEL AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND NAVER DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE MODEL, DERIVATIVE MODELS, OR OUTPUTS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE MODEL AND ANY OUTPUTS AND RESULTS AND YOUR EXERCISE OF PERMISSION UNDER THIS AGREEMENT.
41
+
42
+ 7. LIMITATION OF LIABILITY. IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, UNLESS REQUIRED BY APPLICABLE LAW (SUCH AS IN CASES OF DELIBERATE AND GROSSLY NEGLIGENT ACTS), WILL NAVER BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY, OR PUNITIVE DAMAGES, OR LOST PROFITS OF ANY KIND, ARISING FROM OR RELATED TO THIS AGREEMENT, OR RESULTING FROM THE USE OR INABILITY TO USE THE MODEL, DERIVATIVE MODELS, OR OUTPUTS (INCLUDING, BUT NOT LIMITED TO, DAMAGES FOR LOSS OF GOODWILL, WORK STOPPAGES, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR LOSSES), EVEN IF NAVER HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
43
+
44
+ 8. Indemnity. You will indemnify and hold harmless NAVER from and against any claim by any third party arising out of or related to your use or distribution of the Model, Derivative Model or Outputs.
45
+
46
+ 9. Intellectual Property.
47
+
48
+ 9.1. This Agreement does not grant permission to use the trade names, trademarks, service marks, or product names of NAVER, except as required for reasonable and customary use in describing the origin of the Model and reproducing the content of the “Notice” text file.
49
+
50
+ 9.2. NAVER Corp. owns the Model and any Derivative Model created by NAVER Corp. Except as expressly granted in this Agreement, NAVER Corp. reserves all rights, interests and remedies in connection with the Model and Derivative Model created by NAVER Corp. and no other license or right is granted to you by implication, estoppel or otherwise. Subject to NAVER Corp.’s ownership of the Model and any Derivative Model made by or for NAVER Corp., with respect to any derivative works and modifications of the Model that are made by you, as between you and NAVER Corp., you are and will be the owner of such derivative works and modifications.
51
+
52
+ 10. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Model and will continue in full force and effect until terminated in accordance with the terms and conditions of this Agreement. NAVER may terminate this Agreement if you breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Model and Derivative Model. Section 5, 6, 7 and 10 shall survive the termination of this Agreement.
53
+
54
+ 11. Governing Law and Jurisdiction.
55
+
56
+ 11.1. This Agreement will be governed by and construed in accordance with the laws of the Republic of Korea, without regard to its conflicts of laws principles.
57
+
58
+ 11.2. Any disputes, controversies, or claims arising out of or relating to this Agreement, including its existence, validity, interpretation, performance, breach, or termination, shall be referred to and finally resolved by arbitration administered by the Korean Commercial Arbitration Board (KCAB) in accordance with the International Arbitration Rules of the Korean Commercial Arbitration Board in force at the time of the commencement of the arbitration. The seat of arbitration shall be Seoul, Republic of Korea. The tribunal shall consist of one arbitrator. The language of the arbitration shall be English. Either party may seek interim or provisional relief from a court of competent jurisdiction, and doing so shall not be considered a waiver of any provision in this section. The arbitral tribunal also has the authority to issue orders for interim or provisional relief.
59
+
60
+ 12. Modifications. NAVER reserves the right to modify or amend this Agreement at any time, in its sole discretion. Any modifications will be effective upon posting the updated Agreement on our website or through other means of communication. You are responsible for reviewing the Agreement periodically for changes.
61
+
62
+ 13. No Waiver. NAVER will not be treated as having waived any rights by not exercising (or delaying the exercise of) any rights under this Agreement.
__init__.py ADDED
@@ -0,0 +1,7 @@
1
+ from transformers import AutoConfig, AutoModel, AutoModelForCausalLM
2
+
3
+ from .configuration_hyperclovax import HCXVisionConfig
4
+ from .modeling_hyperclovax import HCXVisionForCausalLM
5
+
6
+ AutoConfig.register("hyperclovax_vlm", HCXVisionConfig)
7
+ AutoModelForCausalLM.register(HCXVisionConfig, HCXVisionForCausalLM)
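
Because __init__.py registers the "hyperclovax_vlm" config and model classes with the Auto* APIs, the checkpoint can be loaded through the standard transformers entry points once these files sit in a Hub repository or local directory. A minimal loading sketch (the repo path below is a placeholder, and trust_remote_code=True is assumed so the custom code in this commit is executed):

    from transformers import AutoConfig, AutoModelForCausalLM

    repo = "path/to/HyperCLOVAX-Seed-Vision-3B"  # placeholder: Hub repo id or local checkout

    config = AutoConfig.from_pretrained(repo, trust_remote_code=True)           # resolves to HCXVisionConfig
    model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)  # resolves to HCXVisionForCausalLM
    print(type(config).__name__, type(model).__name__)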
config.json ADDED
@@ -0,0 +1,190 @@
1
+ {
2
+ "anyres": true,
3
+ "architectures": [
4
+ "HCXVisionForCausalLM"
5
+ ],
6
+ "decoder_max_length": 16384,
7
+ "freeze_decoder": false,
8
+ "freeze_encoder": true,
9
+ "freeze_mm_projector": false,
10
+ "hidden_size": 3072,
11
+ "ignore_index": -100,
12
+ "img_start_id": 100271,
13
+ "language_config": {
14
+ "add_cross_attention": false,
15
+ "architectures": [
16
+ "LlamaForCausalLM"
17
+ ],
18
+ "attention_bias": false,
19
+ "attention_dropout": 0.0,
20
+ "bad_words_ids": null,
21
+ "begin_suppress_tokens": null,
22
+ "bos_token_id": 100257,
23
+ "chunk_size_feed_forward": 0,
24
+ "cross_attention_hidden_size": null,
25
+ "decoder_start_token_id": null,
26
+ "diversity_penalty": 0.0,
27
+ "do_sample": false,
28
+ "early_stopping": false,
29
+ "encoder_no_repeat_ngram_size": 0,
30
+ "end_token_id": 100257,
31
+ "eos_token_id": 100257,
32
+ "exponential_decay_length_penalty": null,
33
+ "finetuning_task": null,
34
+ "forced_bos_token_id": null,
35
+ "forced_eos_token_id": null,
36
+ "head_dim": 128,
37
+ "hidden_act": "silu",
38
+ "hidden_size": 3072,
39
+ "id2label": {
40
+ "0": "LABEL_0",
41
+ "1": "LABEL_1"
42
+ },
43
+ "initializer_range": 0.02,
44
+ "intermediate_size": 7168,
45
+ "is_decoder": false,
46
+ "is_encoder_decoder": false,
47
+ "label2id": {
48
+ "LABEL_0": 0,
49
+ "LABEL_1": 1
50
+ },
51
+ "length_penalty": 1.0,
52
+ "logits_scaling": 1.0,
53
+ "max_length": 20,
54
+ "max_position_embeddings": 131072,
55
+ "min_length": 0,
56
+ "mlp_bias": false,
57
+ "model_type": "llama",
58
+ "no_repeat_ngram_size": 0,
59
+ "num_attention_heads": 24,
60
+ "num_beam_groups": 1,
61
+ "num_beams": 1,
62
+ "num_hidden_layers": 32,
63
+ "num_key_value_heads": 8,
64
+ "num_return_sequences": 1,
65
+ "output_attentions": false,
66
+ "output_hidden_states": false,
67
+ "output_scores": false,
68
+ "pad_token_id": 100257,
69
+ "prefix": null,
70
+ "pretraining_tp": 1,
71
+ "problem_type": null,
72
+ "pruned_heads": {},
73
+ "remove_invalid_values": false,
74
+ "repetition_penalty": 1.0,
75
+ "resid_pdrop": 0.2,
76
+ "return_dict": true,
77
+ "return_dict_in_generate": false,
78
+ "rms_norm_eps": 1e-05,
79
+ "rope_scaling": null,
80
+ "rope_theta": 100000000,
81
+ "sep_token_id": null,
82
+ "suppress_tokens": null,
83
+ "task_specific_params": null,
84
+ "temperature": 1.0,
85
+ "tf_legacy_loss": false,
86
+ "tie_encoder_decoder": false,
87
+ "tie_word_embeddings": true,
88
+ "tokenizer_class": null,
89
+ "top_k": 50,
90
+ "top_p": 1.0,
91
+ "torch_dtype": "bfloat16",
92
+ "torchscript": false,
93
+ "transformers_version": "4.45.0",
94
+ "typical_p": 1.0,
95
+ "use_bfloat16": false,
96
+ "use_cache": true,
97
+ "vocab_size": 110592
98
+ },
99
+ "max_num_grids": 9,
100
+ "model_type": "hyperclovax_vlm",
101
+ "max_image_cnt": 12,
102
+ "num_queries_vis_abstractor": 81,
103
+ "proj_pos_emb": true,
104
+ "proj_prenorm": false,
105
+ "q_former_model_name_or_path": null,
106
+ "torch_dtype": "float32",
107
+ "transformers_version": "4.45.0",
108
+ "unpad": true,
109
+ "use_1x1_grid": true,
110
+ "use_nth_layer": -2,
111
+ "vision_config": {
112
+ "add_cross_attention": false,
113
+ "anyres": true,
114
+ "architectures": [
115
+ "SiglipVisionModel"
116
+ ],
117
+ "attention_dropout": 0.0,
118
+ "auto_map": {},
119
+ "bad_words_ids": null,
120
+ "begin_suppress_tokens": null,
121
+ "bos_token_id": null,
122
+ "chunk_size_feed_forward": 0,
123
+ "cross_attention_hidden_size": null,
124
+ "decoder_start_token_id": null,
125
+ "diversity_penalty": 0.0,
126
+ "do_sample": false,
127
+ "early_stopping": false,
128
+ "encoder_no_repeat_ngram_size": 0,
129
+ "eos_token_id": null,
130
+ "exponential_decay_length_penalty": null,
131
+ "finetuning_task": null,
132
+ "forced_bos_token_id": null,
133
+ "forced_eos_token_id": null,
134
+ "hidden_act": "gelu_pytorch_tanh",
135
+ "hidden_size": 1152,
136
+ "id2label": {
137
+ "0": "LABEL_0",
138
+ "1": "LABEL_1"
139
+ },
140
+ "image_size": 378,
141
+ "initializer_factor": 1.0,
142
+ "intermediate_size": 4304,
143
+ "is_decoder": false,
144
+ "is_encoder_decoder": false,
145
+ "label2id": {
146
+ "LABEL_0": 0,
147
+ "LABEL_1": 1
148
+ },
149
+ "layer_norm_eps": 1e-06,
150
+ "length_penalty": 1.0,
151
+ "max_length": 20,
152
+ "max_num_grids": 9,
153
+ "min_length": 0,
154
+ "model_type": "siglip_vision_model",
155
+ "no_repeat_ngram_size": 0,
156
+ "num_attention_heads": 16,
157
+ "num_beam_groups": 1,
158
+ "num_beams": 1,
159
+ "num_channels": 3,
160
+ "num_hidden_layers": 27,
161
+ "num_return_sequences": 1,
162
+ "output_attentions": false,
163
+ "output_hidden_states": false,
164
+ "output_scores": false,
165
+ "pad_token_id": null,
166
+ "patch_size": 14,
167
+ "prefix": null,
168
+ "problem_type": null,
169
+ "pruned_heads": {},
170
+ "remove_invalid_values": false,
171
+ "repetition_penalty": 1.0,
172
+ "return_dict": true,
173
+ "return_dict_in_generate": false,
174
+ "sep_token_id": null,
175
+ "suppress_tokens": null,
176
+ "task_specific_params": null,
177
+ "temperature": 1.0,
178
+ "tf_legacy_loss": false,
179
+ "tie_encoder_decoder": false,
180
+ "tie_word_embeddings": true,
181
+ "tokenizer_class": null,
182
+ "top_k": 50,
183
+ "top_p": 1.0,
184
+ "torch_dtype": "bfloat16",
185
+ "torchscript": false,
186
+ "transformers_version": "4.45.0",
187
+ "typical_p": 1.0,
188
+ "use_bfloat16": true
189
+ }
190
+ }
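
A few of these values fix the visual token budget per image. The SigLIP encoder sees 378x378 crops split into 14x14 patches (27x27 = 729 patches per grid cell), which the HCXVisionCAbstractor projector compresses to num_queries_vis_abstractor = 81 tokens (a 9x9 map); with up to max_num_grids = 9 grid cells, an image contributes at most 9 x 81 = 729 visual tokens from its grid cells, before the base image features and newline embeddings are added. A small sanity-check sketch of that arithmetic, using the values from the config above:

    image_size = 378    # vision_config.image_size
    patch_size = 14     # vision_config.patch_size
    num_queries = 81    # num_queries_vis_abstractor
    max_num_grids = 9   # maximum anyres grid cells per image

    patches_per_grid = (image_size // patch_size) ** 2   # 27 * 27 = 729 ViT patches per grid cell
    tokens_per_grid = num_queries                         # compressed to 81 (9x9) by the abstractor
    print(patches_per_grid, tokens_per_grid, max_num_grids * tokens_per_grid)  # 729 81 729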
configuration_hyperclovax.py ADDED
@@ -0,0 +1,60 @@
1
+ from transformers.configuration_utils import PretrainedConfig
2
+ from transformers.utils import logging
3
+
4
+ logger = logging.get_logger(__name__)
5
+
6
+
7
+ class HCXVisionConfig(PretrainedConfig):
8
+ model_type = "hyperclovax_vlm"
9
+ keys_to_ignore_at_inference = ["past_key_values"]
10
+
11
+ # GPT-2-style configs use different attribute names (e.g. `n_embd`), so remap them to the standard names.
12
+ language_config_attribute_map = {
13
+ "n_embd": "hidden_size",
14
+ "n_positions": "max_position_embeddings",
15
+ "n_head": "num_attention_heads",
16
+ "n_layer": "num_hidden_layers",
17
+ }
18
+
19
+ def __init__(
20
+ self,
21
+ language_config=None,
22
+ vision_config=None,
23
+ use_nth_layer=-2,
24
+ img_start_id=100009, # <|dummy3|>
25
+ decoder_max_length=4096,
26
+ anyres=False,
27
+ unpad=False,
28
+ max_num_grids=-1,
29
+ num_queries_vis_abstractor=-1,
30
+ ignore_index=-100,
31
+ proj_pos_emb=True,
32
+ proj_prenorm=False,
33
+ use_1x1_grid=False,
34
+ **kwargs,
35
+ ):
36
+ for key, val in self.language_config_attribute_map.items():
37
+ if language_config is not None and key in language_config:
38
+ language_config[val] = language_config.pop(key)
39
+
40
+ self.language_config = language_config
41
+ self.vision_config = vision_config
42
+
43
+ if language_config is not None:
44
+ # In DeepSpeed ZeRO-3, the memory size is automatically determined based on the `hidden_size` specified in the config.
45
+ self.hidden_size = (
46
+ language_config["hidden_size"] if "hidden_size" in language_config else language_config["n_embd"]
47
+ )
48
+ # add VLM configs
49
+ self.use_nth_layer = use_nth_layer
50
+ self.decoder_max_length = decoder_max_length
51
+ self.anyres = anyres
52
+ self.unpad = unpad
53
+ self.max_num_grids = max_num_grids
54
+ self.num_queries_vis_abstractor = num_queries_vis_abstractor
55
+ self.img_start_id = img_start_id
56
+ self.ignore_index = ignore_index
57
+ self.proj_pos_emb = proj_pos_emb
58
+ self.proj_prenorm = proj_prenorm
59
+ self.use_1x1_grid = use_1x1_grid
60
+ super().__init__(**kwargs)
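
The language_config_attribute_map above renames GPT-2-style keys (n_embd, n_positions, n_head, n_layer) to the LLaMA-style names the rest of the code expects, and mirrors hidden_size onto the top-level config for DeepSpeed ZeRO-3 sizing. A minimal sketch of that remapping in isolation (toy values, run from a directory containing this file):

    from configuration_hyperclovax import HCXVisionConfig

    cfg = HCXVisionConfig(
        language_config={"model_type": "gpt2", "n_embd": 768, "n_positions": 1024, "n_head": 12, "n_layer": 12},
        vision_config=None,
    )
    print(cfg.language_config["hidden_size"])  # 768, renamed from n_embd
    print(cfg.hidden_size)                     # 768, mirrored for ZeRO-3 memory estimation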
modeling_hyperclovax.py ADDED
@@ -0,0 +1,1810 @@
1
+ import ast
2
+ import contextlib
3
+ import gc
4
+ import json
5
+ import math
6
+ import os
7
+ from dataclasses import dataclass
8
+ from functools import partial
9
+ from itertools import chain
10
+ from typing import Any, Dict, List, Optional, Tuple, Union
11
+
12
+ import torch
13
+ import torch.distributed as dist
14
+ import torch.nn as nn
15
+ from einops import rearrange
16
+ from timm.layers import LayerNorm, LayerNorm2d
17
+ from timm.models.regnet import RegStage
18
+ from torch.nn import CrossEntropyLoss
19
+ from transformers import (
20
+ AutoConfig,
21
+ AutoModel,
22
+ AutoModelForCausalLM,
23
+ AutoTokenizer,
24
+ PreTrainedModel,
25
+ )
26
+ from transformers.generation.utils import GenerationMixin
27
+ from transformers.integrations.deepspeed import is_deepspeed_zero3_enabled
28
+ from transformers.modeling_utils import (
29
+ is_fsdp_enabled,
30
+ is_local_dist_rank_0,
31
+ no_init_weights,
32
+ )
33
+ from transformers.models.auto import CONFIG_MAPPING
34
+ from transformers.utils import ModelOutput
35
+
36
+ from .configuration_hyperclovax import HCXVisionConfig
37
+ from .preprocessor import select_best_resolution
38
+
39
+ EOT = "<|endofturn|>"
40
+ IMG_LOC = "<|dummy3|>"
41
+
42
+
43
+ def get_rank():
44
+ if dist.is_initialized():
45
+ return dist.get_rank()
46
+ return 0
47
+
48
+
49
+ def get_world_size():
50
+ if torch.distributed.is_initialized():
51
+ world_size = torch.distributed.get_world_size()
52
+ else:
53
+ world_size = 1
54
+ return world_size
55
+
56
+
57
+ def unpad_image(tensor: torch.Tensor, original_size: Tuple[int, int]) -> torch.Tensor:
58
+ """Unpads a PyTorch tensor of a padded and resized image.
59
+
60
+ This function removes padding from a tensor image that was previously padded and resized.
61
+ The padding is removed based on the aspect ratio difference between the original and current image dimensions.
62
+
63
+ Args:
64
+ tensor: The image tensor, assumed to be in CxHxW format.
65
+ original_size: The original size of the image as (width, height).
66
+
67
+ Returns:
68
+ The unpadded image tensor.
69
+
70
+ Examples:
71
+ >>> import torch
72
+ >>> # Example 1: Unpadding with height padding
73
+ >>> padded_tensor = torch.randn(1, 64, 48) # Padded tensor (C=1, H=64, W=48)
74
+ >>> original_size = (32, 32) # Original size (width=32, height=32)
75
+ >>> unpadded_tensor = unpad_image(padded_tensor, original_size)
76
+ >>> unpadded_tensor.shape
77
+ torch.Size([1, 48, 48])
78
+ >>> # Example 2: Unpadding with width padding
79
+ >>> padded_tensor = torch.randn(1, 48, 64) # Padded tensor (C=1, H=48, W=64)
80
+ >>> original_size = (32, 32) # Original size (width=32, height=32)
81
+ >>> unpadded_tensor = unpad_image(padded_tensor, original_size)
82
+ >>> unpadded_tensor.shape
83
+ torch.Size([1, 48, 48])
84
+ """
85
+ original_width, original_height = original_size
86
+ current_height, current_width = tensor.shape[1:]
87
+
88
+ original_aspect_ratio = original_width / original_height
89
+ current_aspect_ratio = current_width / current_height
90
+
91
+ if original_aspect_ratio > current_aspect_ratio:
92
+ scale_factor = current_width / original_width
93
+ new_height = int(original_height * scale_factor)
94
+ padding = (current_height - new_height) // 2
95
+ unpadded_tensor = tensor[:, padding : current_height - padding, :]
96
+ else:
97
+ scale_factor = current_height / original_height
98
+ new_width = int(original_width * scale_factor)
99
+ padding = (current_width - new_width) // 2
100
+ unpadded_tensor = tensor[:, :, padding : current_width - padding]
101
+
102
+ return unpadded_tensor
103
+
104
+
105
+ def get_anyres_image_grid_shape(
106
+ image_size: Tuple[int, int],
107
+ grid_pinpoints: Union[str, List[Tuple[int, int]]],
108
+ patch_size: int,
109
+ ) -> Tuple[int, int]:
110
+ """Calculates the image patch grid shape after any-resolution preprocessing.
111
+
112
+ Selects the optimal resolution from predefined grid pinpoints based on input image
113
+ dimensions using `select_best_resolution`, then computes the grid layout by
114
+ dividing the selected resolution by the patch size using integer division.
115
+
116
+ Args:
117
+ image_size (Tuple[int, int]): Original image dimensions in (width, height) format.
118
+ grid_pinpoints (Union[str, List[Tuple[int, int]]]): Accepts either:
119
+ - List of (height, width) resolution tuples
120
+ - String representation of list (e.g., "[(224, 224), (336, 336)]")
121
+ patch_size (int): Spatial dimension of square patches for grid division.
122
+
123
+ Returns:
124
+ Tuple[int, int]: Grid dimensions as (num_patches_width, num_patches_height).
125
+
126
+ Examples:
127
+ >>> # Basic case with list input
128
+ >>> get_anyres_image_grid_shape((1000, 800), [(224, 224), (448, 448)], 112)
129
+ (4, 4)
130
+
131
+ >>> # Basic case with string input
132
+ >>> get_anyres_image_grid_shape((600, 400), "[(336, 336), (672, 672)]", 112)
133
+ (6, 6)
134
+
135
+ >>> # Case where resolution is not perfectly divisible by patch_size
136
+ >>> # select_best_resolution picks (224, 224). 224 // 100 = 2
137
+ >>> get_anyres_image_grid_shape((500, 500), [(224, 224)], 100)
138
+ (2, 2)
139
+
140
+ >>> # Different patch size
141
+ >>> # select_best_resolution picks (448, 448). 448 // 224 = 2
142
+ >>> get_anyres_image_grid_shape((1200, 900), [(448, 448), (224, 224)], 224)
143
+ (2, 2)
144
+
145
+ Note:
146
+ String-formatted grid_pinpoints are converted via ast.literal_eval. Invalid formats
147
+ may raise syntax exceptions. The actual resolution selection depends on the
148
+ implementation of `select_best_resolution`. The doctests assume
149
+ `select_best_resolution` picks the *first* resolution provided in `grid_pinpoints`.
150
+ """
151
+ possible_resolutions = grid_pinpoints if isinstance(grid_pinpoints, list) else ast.literal_eval(grid_pinpoints)
152
+
153
+ original_width, original_height = image_size
154
+ height, width = select_best_resolution((original_height, original_width), possible_resolutions)
155
+ return width // patch_size, height // patch_size
156
+
157
+
158
+ def reshape_and_unpad_image_features(
159
+ image_feature: torch.Tensor,
160
+ height: int,
161
+ width: int,
162
+ image_size: Tuple[int, int],
163
+ possible_resolutions: List[Tuple[int, int]],
164
+ grid_size: int,
165
+ unpad: bool,
166
+ image_newline: torch.Tensor,
167
+ ) -> torch.Tensor:
168
+ """Reshapes and processes image features with optional unpadding operation.
169
+
170
+ Processes input image features by:
171
+ 1. Separating base features from spatial features
172
+ 2. Reshaping spatial features into a 5D tensor (num_patch_height, num_patch_width, height, width, channels)
173
+ 3. Performing either unpadding operation or simple reshaping based on 'unpad' flag
174
+ 4. Concatenating processed features with base features
175
+
176
+ Args:
177
+ image_feature: Input tensor containing image features with shape
178
+ [1 + num_patches, feature_dim] where the first element is the base feature
179
+ height: Original image height in pixels
180
+ width: Original image width in pixels
181
+ image_size: Target image size as (width, height) tuple
182
+ possible_resolutions: List of possible [height, width] resolutions for multi-scale processing
183
+ grid_size: Grid dimension for patch arrangement
184
+ unpad: Flag to enable unpadding operation
185
+ image_newline: Special token tensor used as separator when unpadding
186
+
187
+ Returns:
188
+ torch.Tensor: Processed image features tensor with shape [1 + num_processed_patches, feature_dim]
189
+
190
+ Raises:
191
+ AssertionError: If base feature dimension doesn't match height*width
192
+ """
193
+ base_image_feature = image_feature[0]
194
+ image_feature = image_feature[1:]
195
+
196
+ assert (
197
+ height * width == base_image_feature.shape[0]
198
+ ), f"height: {height}, width: {width}, base_image_feature.shape[0]: {base_image_feature.shape[0]}"
199
+
200
+ num_patch_width, num_patch_height = get_anyres_image_grid_shape(image_size, possible_resolutions, grid_size)
201
+ image_feature = image_feature.view(num_patch_height, num_patch_width, height, width, -1)
202
+
203
+ if unpad:
204
+ image_feature = image_feature.permute(4, 0, 2, 1, 3).contiguous()
205
+ image_feature = image_feature.flatten(1, 2).flatten(2, 3)
206
+ image_feature = unpad_image(image_feature, image_size)
207
+ image_feature = torch.cat(
208
+ (
209
+ image_feature,
210
+ image_newline[:, None, None].expand(*image_feature.shape[:-1], 1).to(image_feature.device),
211
+ ),
212
+ dim=-1,
213
+ )
214
+ image_feature = image_feature.flatten(1, 2).transpose(0, 1)
215
+ else:
216
+ image_feature = image_feature.permute(0, 2, 1, 3, 4).contiguous()
217
+ image_feature = image_feature.flatten(0, 3)
218
+ image_feature = torch.cat((base_image_feature, image_feature), dim=0)
219
+
220
+ return image_feature
221
+
222
+
223
+ def anyres_postprocessing(
224
+ image_forward_outs: torch.FloatTensor,
225
+ split_sizes: List[int],
226
+ image_sizes: List[List[int]],
227
+ possible_resolutions: List[Tuple[int, int]],
228
+ is_videos: List[bool],
229
+ patch_size: int,
230
+ grid_size: int,
231
+ image_newline: torch.FloatTensor,
232
+ num_queries_vis_abstractor: int = -1,
233
+ unpad: bool = False,
234
+ ) -> List[torch.FloatTensor]:
235
+ """Processes 2D visual features into 1D sequences with post-processing steps.
236
+
237
+ Performs AnyRes postprocessing by flattening 2D visual features from grid partitions into 1D sequences, adding
238
+ newline embeddings at row boundaries for images, and optionally removing padding regions based on original image
239
+ sizes. For video data, processes each frame's features separately into a single sequence per video and disables
240
+ unpadding and newline insertion.
241
+
242
+ Args:
243
+ image_forward_outs (List[torch.FloatTensor]): List of input tensors with shape
244
+ (number_of_images_in_grid, total_patches, feature_dim) containing visual features.
245
+ split_sizes (List[int]): A list containing the number of patches for each sample in the batch. The sum of
246
+ `split_sizes` should equal `image_forward_outs.shape[0]`.
247
+ image_sizes (List[List[int]]): A list where each element is a list `[width, height]` representing the original
248
+ dimensions of the corresponding image sample. Used for unpadding.
249
+ possible_resolutions (List[Tuple[int, int]]): A list of supported resolution tuples `(height, width)` used by
250
+ `reshape_and_unpad_image_features` for spatial reconstruction, especially during unpadding.
251
+ is_videos (List[bool]): A list of boolean flags indicating whether each corresponding sample in the batch is a
252
+ video [`True`] or an image [`False`].
253
+ patch_size (int): The spatial dimension (height and width) of the square patches the image was divided into.
254
+ grid_size (int): The spatial dimension (height and width) of the square grid onto which patches are mapped.
255
+ `grid_size` should be divisible by `patch_size`.
256
+ image_newline (torch.FloatTensor): A learnable tensor representing the newline embedding, typically with shape
257
+ (1, feature_dim). Added after each row of image patches when not unpadding.
258
+ num_queries_vis_abstractor (int, optional): If a visual abstractor with a fixed number of output queries is used
259
+ instead of grid patching, this specifies the number of queries. Must be a perfect square if > 0.
260
+ Defaults to -1 (indicating standard grid patching is used).
261
+ unpad (bool, optional): If `True`, removes padding tokens from image features based on `image_sizes` and
262
+ `possible_resolutions`. Does not apply to video features. Defaults to False.
263
+
264
+ Returns:
265
+ List[torch.FloatTensor]: A list of tensors, where each tensor represents the processed 1D sequence of visual
266
+ features for a single sample from the input batch. The length of the sequence varies depending on processing
267
+ (unpadding, newlines, video flattening).
268
+
269
+ Raises:
270
+ AssertionError: If `num_queries_vis_abstractor` is greater than 0 but not a perfect square.
271
+ """
272
+ height = width = grid_size // patch_size
273
+
274
+ if num_queries_vis_abstractor > 0:
275
+ assert (num_queries_vis_abstractor**0.5).is_integer(), "n_queries must be a square number"
276
+ height = width = int(num_queries_vis_abstractor**0.5)
277
+
278
+ image_features = torch.split(image_forward_outs, split_sizes, dim=0)
279
+
280
+ # post-processing (unpad, add newline)
281
+ new_image_features = []
282
+ for image_idx, (image_feature, is_video) in enumerate(zip(image_features, is_videos)):
283
+ if image_feature.shape[0] > 1:
284
+ if not is_video:
285
+ image_feature = reshape_and_unpad_image_features(
286
+ image_feature=image_feature,
287
+ height=height,
288
+ width=width,
289
+ image_size=image_sizes[image_idx],
290
+ possible_resolutions=possible_resolutions,
291
+ grid_size=grid_size, # Pass grid info if needed by helper
292
+ unpad=unpad,
293
+ image_newline=image_newline,
294
+ )
295
+ else:
296
+ image_feature = image_feature.flatten(0, 1)
297
+ else:
298
+ image_feature = image_feature[0]
299
+ if unpad and not is_video:
300
+ image_feature = torch.cat((image_feature, image_newline[None].to(image_feature.device)), dim=0)
301
+ new_image_features.append(image_feature)
302
+ image_features = new_image_features
303
+ return image_features
304
+
305
+
306
+ def adaptive_anyres_postprocessing(
307
+ image_forward_outs: torch.FloatTensor,
308
+ image_sizes: List[List[int]],
309
+ possible_resolutions: List[Tuple[int, int]],
310
+ is_videos: List[bool],
311
+ group_ids: List[List[int]],
312
+ num_queries_vis_abstractors: List[List[int]],
313
+ grid_size: int,
314
+ image_newline: torch.FloatTensor,
315
+ unpad: bool = False,
316
+ ) -> List[torch.FloatTensor]:
317
+ """Adaptive AnyRes postprocessing for multi-group feature aggregation.
318
+
319
+ Processes 2D visual features into 1D sequences with group-wise adaptive processing. Each image can belong to
320
+ multiple processing groups with different query configurations. Features are processed per group and aggregated
321
+ according to group_ids.
322
+
323
+ Args:
324
+ image_forward_outs (List[torch.FloatTensor]): List of input tensors with shape
325
+ (number_of_images_in_grid, total_patches, feature_dim) containing visual features.
326
+ image_sizes (List[List[int]]): Original image dimensions for each sample. [[width, height], ... ]
327
+ possible_resolutions (List[Tuple[int, int]]): Supported resolutions. [[height, width], ... ]
328
+ is_videos (List[bool]): Flags indicating video inputs
329
+ group_ids (List[List[int]]): Group indices for feature aggregation. Each group means a single grid.
330
+ num_queries_vis_abstractors (List[List[int]]): Query numbers per group
331
+ grid_size (int): Total grid size for spatial processing
332
+ image_newline (torch.FloatTensor): Sample-wise config. Newline embedding tensor
333
+ unpad (bool, optional): Sample-wise config. Enable padding removal. Defaults to False.
334
+
335
+ Returns:
336
+ List[torch.FloatTensor]: Aggregated features per group
337
+
338
+ Raises:
339
+ AssertionError: If num_queries is not square number in any group
340
+ """
341
+ # post-processing (unpad, add newline)
342
+ new_image_features = []
343
+ for image_idx, (image_feature, is_video) in enumerate(zip(image_forward_outs, is_videos)):
344
+ num_queries_vis_abstractor = num_queries_vis_abstractors[image_idx]
345
+ assert (num_queries_vis_abstractor**0.5).is_integer(), "n_queries must be a square number"
346
+ height = width = int(num_queries_vis_abstractor**0.5)
347
+
348
+ if image_feature.shape[0] > 1:
349
+ if not is_video:
350
+ image_feature = reshape_and_unpad_image_features(
351
+ image_feature=image_feature,
352
+ height=height,
353
+ width=width,
354
+ image_size=image_sizes[image_idx],
355
+ possible_resolutions=possible_resolutions,
356
+ grid_size=grid_size,
357
+ unpad=unpad,
358
+ image_newline=image_newline,
359
+ )
360
+ else:
361
+ image_feature = image_feature.flatten(0, 1)
362
+ else:
363
+ image_feature = image_feature[0]
364
+ if unpad and not is_video:
365
+ image_feature = torch.cat((image_feature, image_newline[None].to(image_feature.device)), dim=0)
366
+ new_image_features.append(image_feature)
367
+
368
+ image_features = [
369
+ torch.cat([new_image_features[group_id] for group_id in group_ids_list], dim=0) for group_ids_list in group_ids
370
+ ]
371
+ return image_features
372
+
373
+
374
+ @dataclass
375
+ class HCXVisionOutput(ModelOutput):
376
+ """Output class for vision models, containing various computation results.
377
+
378
+ Args:
379
+ loss (Optional[torch.FloatTensor], optional): Total cross-entropy loss calculated from logits and labels.
380
+ loss_per_sample (Optional[torch.FloatTensor], optional): Per-sample loss values for advanced loss processing.
381
+ logits (torch.FloatTensor): Classification scores (before SoftMax) of shape (batch_size, num_classes).
382
+ past_key_values (Optional[Tuple[Tuple[torch.FloatTensor]]], optional): Contains precomputed hidden-states
383
+ that can be used (see `past_key_values` input) to speed up sequential decoding.
384
+ hidden_states (Optional[Tuple[torch.FloatTensor]], optional):
385
+ Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
386
+ shape (batch_size, sequence_length, hidden_size).
387
+ Hidden-states of the model at the output of each layer plus the initial embedding outputs.
388
+ attentions (Optional[Tuple[torch.FloatTensor]], optional): Tuple of torch.FloatTensor (one for each layer)
389
+ of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention
390
+ softmax, used to compute the weighted average in the self-attention heads.
391
+ """
392
+
393
+ loss: Optional[torch.FloatTensor] = None
394
+ loss_per_sample: Optional[torch.FloatTensor] = None
395
+ logits: torch.FloatTensor = None
396
+ past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
397
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
398
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
399
+
400
+
401
+ class HCXVisionForCausalLM(PreTrainedModel, GenerationMixin):
402
+ """HCX Vision model for causal language modeling with vision-language capabilities.
403
+
404
+ This class combines a vision model with a language model to create a multimodal model
405
+ capable of processing images or videos and generating text based on the visual inputs.
406
+
407
+ Attributes:
408
+ config_class: Configuration class for the model.
409
+ vision_model_name: Name of the vision model component.
410
+ _no_split_modules: List of modules that should not be split during parallel processing.
411
+ supports_gradient_checkpointing: Whether the model supports gradient checkpointing.
412
+ _skip_keys_device_placement: Keys to skip during device placement.
413
+ """
414
+
415
+ config_class = HCXVisionConfig
416
+ vision_model_name = "vision_model"
417
+ _no_split_modules = ["CLIPAttention", "SiglipVisionModel"]
418
+ supports_gradient_checkpointing = True
419
+ _skip_keys_device_placement = "past_key_values"
420
+
421
+ def __init__(
422
+ self,
423
+ config: HCXVisionConfig,
424
+ **kwargs: Optional[Any],
425
+ ) -> None:
426
+ """Initialize the HCXVisionForCausalLM model.
427
+
428
+ Args:
429
+ config: Configuration object for the model containing parameters for both
430
+ vision and language components.
431
+ **kwargs: Additional keyword arguments:
432
+ - use_liger: Whether to use liger kernel for hyperclovax models.
433
+ - use_fused_ce: Whether to use fused cross-entropy loss.
434
+ - use_sum_loss: Whether to use sum reduction for loss instead of mean.
435
+ - is_safetensor_save: Whether to save model using safetensors format.
436
+
437
+ Raises:
438
+ ValueError: If vision_config is not defined or if language_config is not defined.
439
+ """
440
+ super().__init__(config)
441
+
442
+ self.flag_changed_max_position_embeddings = False
443
+
444
+ vision_model_type = config.vision_config["model_type"]
445
+ if vision_model_type in CONFIG_MAPPING:
446
+ vision_config = CONFIG_MAPPING[vision_model_type](**config.vision_config)
447
+ vision_config.auto_map = {}
448
+ else:
449
+ if config.vision_model_name_or_path is not None:
450
+ vision_config = AutoConfig.from_pretrained(config.vision_model_name_or_path, trust_remote_code=True)
451
+ elif config.vision_config["_name_or_path"] is not None:
452
+ vision_config = AutoConfig.from_pretrained(
453
+ config.vision_config["_name_or_path"], trust_remote_code=True
454
+ )
455
+ else:
456
+ raise ValueError("vision_config is not defined")
457
+
458
+ self.use_liger = kwargs.pop("use_liger", False)
459
+ self.use_fused_ce = kwargs.pop("use_fused_ce", False)
460
+ self.reduction = "sum" if kwargs.pop("use_sum_loss", False) else "mean"
461
+
462
+ self.vision_config = vision_config
463
+ vision_config.anyres = config.anyres
464
+ vision_config.max_num_grids = config.max_num_grids
465
+
466
+ possible_resolutions = []
467
+ if config.anyres:
468
+ assert config.max_num_grids > 0
469
+ for i in range(1, config.max_num_grids + 1):
470
+ for j in range(1, config.max_num_grids + 1):
471
+ if i == 1 and j == 1 and not config.use_1x1_grid:
472
+ continue
473
+ if i * j <= config.max_num_grids:
474
+ possible_resolutions.append([i, j])
475
+
476
+ possible_resolutions = [
477
+ [ys * vision_config.image_size, xs * vision_config.image_size] for ys, xs in possible_resolutions
478
+ ]
479
+
480
+ self.possible_resolutions = possible_resolutions
481
+
482
+ with no_init_weights():
483
+ self.vision_model = AutoModel.from_config(
484
+ vision_config, trust_remote_code=True
485
+ ) # weight will be loaded in from_pretrained
486
+
487
+ assert config.language_config["model_type"] == "llama"
488
+ language_config = CONFIG_MAPPING["llama"](**config.language_config)
489
+ language_config._attn_implementation = kwargs.get("attn_implementation", "sdpa")  # defaults to SDPA; pass attn_implementation="flash_attention_2" to enable flash attention
490
+ language_config.logits_scaling = 1.0
491
+
492
+ self.language_config = language_config
493
+ self.language_model = AutoModelForCausalLM.from_config(language_config)
494
+
495
+ self.language_model.gradient_checkpointing_enable()
496
+ self.num_queries_vis_abstractor = config.num_queries_vis_abstractor
497
+
498
+ # mm_projector (== connector): maps vision_model hidden size -> LLM embedding size
499
+ input_hidden_size = vision_config.hidden_size
500
+ self.mm_projector = HCXVisionCAbstractor(
501
+ num_queries=self.num_queries_vis_abstractor,
502
+ num_input_tokens=(self.vision_config.image_size // self.vision_config.patch_size) ** 2,
503
+ encoder_hidden_size=input_hidden_size,
504
+ hidden_size=input_hidden_size,
505
+ output_hidden_size=language_config.hidden_size,
506
+ pos_emb=config.proj_pos_emb,
507
+ prenorm=config.proj_prenorm,
508
+ )
509
+ self.use_nth_layer = config.use_nth_layer
510
+ self.config.update({"vision_config": self.vision_model.config.to_dict()})
511
+ self.config.update({"language_config": self.language_model.config.to_dict()})
512
+ self.lm_head_vocab_size = (
513
+ language_config.padded_vocab_size
514
+ if hasattr(language_config, "padded_vocab_size")
515
+ else language_config.vocab_size
516
+ )
517
+ self.language_model.lm_head = nn.Linear(language_config.hidden_size, self.lm_head_vocab_size, bias=False)
518
+ self.model_parallel = False
519
+ self.device_map = None
520
+ self.use_no_grad = None
521
+ self.decoder_max_length = config.decoder_max_length
522
+
523
+ self.anyres = config.anyres
524
+ self.unpad = config.unpad
525
+ if self.anyres:
526
+ self.image_newline = nn.Parameter(torch.empty(language_config.hidden_size, dtype=self.dtype))
527
+
528
+ self.is_safetensor_save = kwargs.get("is_safetensor_save", True)
529
+ self._backward_compatibility_gradient_checkpointing()
530
+
531
+ def _init_weights(self, module):
532
+ # copies from https://github.com/kakaobrain/honeybee/blob/main/honeybee/common_layers.py#L55
533
+ if (
534
+ isinstance(module, nn.Conv2d) # noqa: SIM101
535
+ or isinstance(module, nn.Embedding)
536
+ or isinstance(module, nn.Linear)
537
+ ):
538
+ module.weight.data.normal_(mean=0.0, std=0.02)
539
+ if hasattr(module, "bias") and module.bias is not None:
540
+ module.bias.data.zero_()
541
+
542
+ elif isinstance(module, nn.LayerNorm):
543
+ module.bias.data.zero_()
544
+ module.weight.data.fill_(1.0)
545
+ elif isinstance(module, nn.Parameter):
546
+ embed_std = 1 / torch.sqrt(torch.tensor(module.size(0), dtype=torch.float)).to(module.dtype)
547
+ module.data.normal_(mean=0.0, std=embed_std)
548
+
549
+ def forward(
550
+ self,
551
+ input_ids: Optional[torch.LongTensor] = None,
552
+ pixel_values: Optional[List[List[torch.FloatTensor]]] = None,
553
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
554
+ attention_mask: Optional[torch.FloatTensor] = None,
555
+ inputs_embeds: Optional[torch.FloatTensor] = None,
556
+ labels: Optional[torch.LongTensor] = None,
557
+ use_cache: Optional[bool] = None,
558
+ output_attentions: Optional[bool] = None,
559
+ output_hidden_states: Optional[bool] = None,
560
+ return_dict: Optional[bool] = None,
561
+ image_sizes: Optional[List[List[List[int]]]] = None,
562
+ vision_query_lengths: Optional[List[List[int]]] = None,
563
+ non_vision_query_lengths: Optional[List[int]] = None,
564
+ img_start_ids_list: Optional[List[List[int]]] = None,
565
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
566
+ num_queries_vis_abstractors_slow: Optional[List[List[int]]] = None,
567
+ first_last_frames_slows: Optional[List[bool]] = None,
568
+ is_video_list: Optional[List[bool]] = None,
569
+ **kwargs,
570
+ ) -> Union[Tuple, HCXVisionOutput]:
571
+ """Forward pass of the model.
572
+
573
+ This method processes the input tokens and images, combines them into a unified
574
+ representation, and generates text output based on the inputs.
575
+
576
+ Args:
577
+ input_ids: Input token IDs. In positions where images are inputted, the value is replaced by "<|dummy3|>"
578
+ pixel_values: List of lists of 4D tensors for images. Each outer list corresponds to a batch and contains
579
+ inner lists of image tensors.
580
+ past_key_values: Pre-computed key and value states of the attention layers for faster inference.
581
+ attention_mask: Mask to avoid performing attention on padding token indices.
582
+ inputs_embeds: Input embeddings. If provided, input_ids will not be used.
583
+ labels: Labels for computing the language modeling loss.
584
+ use_cache: Whether to use past key/values for faster inference.
585
+ output_attentions: Whether to return attention weights of each layer.
586
+ output_hidden_states: Whether to return hidden states of each layer.
587
+ return_dict: Whether to return a ModelOutput instead of a tuple.
588
+ image_sizes: List of lists representing image dimensions (width, height).
589
+ vision_query_lengths: List of lists containing lengths when each image is converted into visual tokens.
590
+ non_vision_query_lengths: List of lengths of text tokens (excluding visual tokens) for each sample.
591
+ img_start_ids_list: List of lists containing indices of img_start_id tokens for each sample.
592
+ num_queries_vis_abstractors: List of lists containing the number of visual tokens for each image grid.
593
+ For video frames, this is the number of visual tokens for the fast part.
594
+ num_queries_vis_abstractors_slow: List of lists containing number of visual tokens for
595
+ the slow part when applying the slowfast algorithm to video frames.
596
+ first_last_frames_slows: List of booleans indicating whether the slowfast algorithm is
597
+ applied to the first or last frames of the video.
598
+ is_video_list: List of booleans indicating which inputs are videos.
599
+ **kwargs: Additional keyword arguments.
600
+
601
+ Returns:
602
+ If return_dict=True, returns an HCXVisionOutput object containing:
603
+ - loss: Language modeling loss if labels are provided, otherwise None.
604
+ - loss_per_sample: Per-sample loss if labels are provided, otherwise None.
605
+ - logits: Prediction scores of the language modeling head.
606
+ - past_key_values: Past key/values for faster inference if use_cache=True.
607
+ - hidden_states: Hidden states of all layers if output_hidden_states=True.
608
+ - attentions: Attention weights of all layers if output_attentions=True.
609
+ If return_dict=False, returns a tuple containing the above items except loss_per_sample.
610
+ """
611
+ output_attentions = (
612
+ output_attentions if output_attentions is not None else self.config.vision_config["output_attentions"]
613
+ )
614
+ output_hidden_states = (
615
+ output_hidden_states
616
+ if output_hidden_states is not None
617
+ else self.config.vision_config["output_hidden_states"]
618
+ )
619
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
620
+
621
+ if inputs_embeds is None and past_key_values is None:
622
+ inputs_embeds = self.extract_inputs_embeds(
623
+ input_ids=input_ids,
624
+ pixel_values=pixel_values,
625
+ past_key_values=past_key_values,
626
+ image_sizes=image_sizes,
627
+ vision_query_lengths=vision_query_lengths,
628
+ non_vision_query_lengths=non_vision_query_lengths,
629
+ img_start_ids_list=img_start_ids_list,
630
+ num_queries_vis_abstractors=num_queries_vis_abstractors,
631
+ num_queries_vis_abstractors_slow=num_queries_vis_abstractors_slow,
632
+ first_last_frames_slows=first_last_frames_slows,
633
+ is_videos=is_video_list,
634
+ )
635
+
636
+ if inputs_embeds is not None:
637
+ input_ids = None
638
+
639
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
640
+ outputs = self.language_model.base_model(
641
+ input_ids=input_ids,
642
+ inputs_embeds=inputs_embeds,
643
+ attention_mask=attention_mask,
644
+ past_key_values=past_key_values,
645
+ use_cache=use_cache,
646
+ output_attentions=output_attentions,
647
+ output_hidden_states=output_hidden_states,
648
+ return_dict=return_dict,
649
+ )
650
+
651
+ hidden_states = outputs[0]
652
+ hidden_states = hidden_states * self.language_config.logits_scaling
653
+
654
+ loss = None
655
+ loss_per_sample = None
656
+ logits = self.language_model.lm_head(hidden_states)
657
+ if labels is not None:
658
+ # Shift so that tokens < n predict n
659
+ shift_logits = logits[..., :-1, :].contiguous()
660
+ shift_labels = labels[..., 1:].contiguous()
661
+ # Flatten the tokens
662
+ loss_fct = CrossEntropyLoss(reduction="none") # ignore IGNORE_INDEX(-100)
663
+ shift_logits = shift_logits.view(-1, self.lm_head_vocab_size)
664
+ shift_labels = shift_labels.view(-1)
665
+ # Enable model/pipeline parallelism
666
+ shift_labels = shift_labels.to(shift_logits.device)
667
+ loss = loss_fct(shift_logits, shift_labels)
668
+ if get_rank() == 0:
669
+ loss_per_sample = loss.view(logits.shape[0], -1).sum(axis=1) / (
670
+ shift_labels.view(logits.shape[0], -1) != self.config.ignore_index
671
+ ).sum(axis=1)
672
+ loss = loss[shift_labels != self.config.ignore_index].mean()
673
+ if not return_dict:
674
+ output = (logits,) + outputs[1:]
675
+ return (loss,) + output if loss is not None else output
676
+
677
+ return HCXVisionOutput(
678
+ loss=loss,
679
+ loss_per_sample=loss_per_sample,
680
+ logits=logits,
681
+ past_key_values=outputs.past_key_values,
682
+ hidden_states=outputs.hidden_states,
683
+ attentions=outputs.attentions,
684
+ )
685
+
686
+ def determine_non_vision_query_lengths(
687
+ self, input_ids: torch.LongTensor, pad_id: int, img_start_id: int
688
+ ) -> List[int]:
689
+ """Calculate the lengths of non-vision query parts in the input.
690
+
691
+ This method calculates the length of text tokens (excluding visual tokens) for each sample.
692
+ When input_ids are collated, they are padded with pad_id on the right, so this method finds
693
+ these values by identifying pad tokens and img_start_id tokens.
694
+
695
+ Args:
696
+ input_ids: Input token IDs with img_start_id markers for image positions.
697
+ pad_id: Token ID used for padding.
698
+ img_start_id: Token ID marking the start of image data.
699
+
700
+ Returns:
701
+ List of lengths of non-vision query parts for each sample in the batch.
702
+ """
703
+ non_vision_query_lengths = []
704
+ batch_size, len_seq = input_ids.size(0), input_ids.size(1)
705
+
706
+ for i in range(batch_size):
707
+ temp_idx = (input_ids[i] == pad_id).nonzero()
708
+ eos_idx = temp_idx[0, 0].item() if len(temp_idx) > 0 else len_seq
709
+ num_imgs = (input_ids[i] == img_start_id).sum().item()
710
+ non_vision_query_lengths.append(eos_idx - num_imgs)
711
+
712
+ if all([pad_id in input_id for input_id in input_ids.tolist()]):
713
+ non_vision_query_lengths = [
714
+ non_vision_query_length + 1 for non_vision_query_length in non_vision_query_lengths
715
+ ]
716
+
717
+ return non_vision_query_lengths
718
+
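+ # Worked example (illustrative token ids): with pad_id=0 and img_start_id=99,
+ #   input_ids = torch.tensor([[5, 6, 99, 7, 0, 0],
+ #                             [5, 99, 7, 8, 9, 0]])
+ # row 0: first pad at index 4, one img_start token -> 4 - 1 = 3
+ # row 1: first pad at index 5, one img_start token -> 5 - 1 = 4
+ # every row contains pad_id, so 1 is added back to each -> [4, 5]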
719
+ def determine_vision_query_lengths(
720
+ self, image_features: List[List[torch.Tensor]], image_cnts: List[int]
721
+ ) -> List[List[int]]:
722
+ """Calculate the lengths of vision query parts in the input.
723
+
724
+ This method calculates the lengths of visual tokens for each image in each sample based on
725
+ the shapes of image feature tensors. For samples without any images, a dummy image is included
726
+ but then converted to an empty list.
727
+
728
+ Args:
729
+ image_features: List of lists of image features tensors.
730
+ image_cnts: List of counts of images for each sample in the batch.
731
+
732
+ Returns:
733
+ List of lists of lengths of visual tokens for each image in each sample.
734
+ """
735
+ vision_query_lengths = [
736
+ [image_feature.size(0) for image_feature in image_feature_list] for image_feature_list in image_features
737
+ ]
738
+
739
+ for i, image_cnt in enumerate(image_cnts):
740
+ if image_cnt == 0:
741
+ assert len(vision_query_lengths[i]) == 1  # currently a single black dummy image is present
742
+ vision_query_lengths[i] = []  # convert to an empty list
743
+
744
+ return vision_query_lengths
745
+
746
+ # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.get_input_embeddings
747
+ def get_input_embeddings(self):
748
+ return self.language_model.get_input_embeddings()
749
+
750
+ # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.set_input_embeddings
751
+ def set_input_embeddings(self, value):
752
+ self.language_model.set_input_embeddings(value)
753
+
754
+ # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.get_output_embeddings
755
+ def get_output_embeddings(self):
756
+ return self.language_model.get_output_embeddings()
757
+
758
+ # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.set_output_embeddings
759
+ def set_output_embeddings(self, new_embeddings):
760
+ self.language_model.set_output_embeddings(new_embeddings)
761
+
762
+ # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.set_decoder
763
+ def set_decoder(self, decoder):
764
+ self.language_model.set_decoder(decoder)
765
+
766
+ # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.get_decoder
767
+ def get_decoder(self):
768
+ return self.language_model.get_decoder()
769
+
770
+ # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.tie_weights
771
+ def tie_weights(self):
772
+ return self.language_model.tie_weights()
773
+
774
+ # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.resize_token_embeddings
775
+ def resize_token_embeddings(self, new_num_tokens: Optional[int] = None, pad_to_multiple_of=None) -> nn.Embedding:
776
+ model_embeds = self.language_model.resize_token_embeddings(new_num_tokens, pad_to_multiple_of)
777
+ self.config.text_config.vocab_size = model_embeds.num_embeddings
778
+ self.vocab_size = model_embeds.num_embeddings
779
+ return model_embeds
780
+
781
+ def extract_inputs_embeds(
782
+ self,
783
+ input_ids: Optional[torch.LongTensor] = None,
784
+ pixel_values: Optional[List[List[torch.FloatTensor]]] = None, # list of list of 4D tensors
785
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
786
+ image_sizes: Optional[List[List[List[int]]]] = None,
787
+ vision_query_lengths: Optional[List[List[int]]] = None,
788
+ non_vision_query_lengths: Optional[List[int]] = None,
789
+ img_start_ids_list: Optional[List[List[int]]] = None,
790
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
791
+ num_queries_vis_abstractors_slow: Optional[List[List[int]]] = None,
792
+ first_last_frames_slows: Optional[List[bool]] = None,
793
+ is_videos: Optional[List[bool]] = None,
794
+ ):
795
+ """Extract input embeddings by processing text tokens and visual features.
796
+
797
+ This method processes the input tokens and image features, extracts the visual features
798
+ using the vision model, and combines them with the text token embeddings to create
799
+ a unified input representation for the language model.
800
+
801
+ Args:
802
+ input_ids: Input token IDs with img_start_id markers for image positions.
803
+ pixel_values: List of lists of image tensors.
804
+ past_key_values: Pre-computed key and value states for faster inference.
805
+ image_sizes: List of lists of image dimensions (width, height).
806
+ vision_query_lengths: List of lists of lengths when each image is converted to visual tokens.
807
+ non_vision_query_lengths: List of lengths of text tokens (excluding visual tokens) for each sample.
808
+ img_start_ids_list: List of lists containing indices of img_start_id tokens for each sample.
809
+ num_queries_vis_abstractors: List of lists containing number of visual tokens for each image grid.
810
+ num_queries_vis_abstractors_slow: List of lists containing number of visual tokens for
811
+ the slow part when applying the slowfast algorithm to video frames.
812
+ first_last_frames_slows: List of booleans indicating whether the slowfast algorithm is
813
+ applied to the first or last frames of the video.
814
+ is_videos: List of booleans indicating which inputs are videos.
815
+
816
+ Returns:
817
+ Combined embeddings of text tokens and visual features.
818
+ """
819
+ inputs_embeds = None
820
+ if past_key_values:
821
+ pass
822
+ else:
823
+ # Flatten the nested pixel_values for CLIP/connector encoding, then restore the list-of-lists format
824
+ len_pixel_values = [len(pixel_value) for pixel_value in pixel_values]
825
+ concat_pixel_values = torch.cat(list(chain(*pixel_values)), dim=0) # list of list of 4D Tensor
826
+ visual_token_idx = 0 if "siglip" in self.vision_config.model_type else 1
827
+ # Check whether every parameter of the vision encoder has requires_grad=False
828
+ if self.use_no_grad is None:
829
+ self.use_no_grad = all(not p.requires_grad for p in self.vision_model.vision_model.encoder.parameters())
830
+ context = torch.no_grad() if self.use_no_grad else contextlib.nullcontext()
831
+ with context:
832
+ if self.use_no_grad:
833
+ # Chunked encoding (e.g., splitting into 10 chunks) was evaluated here,
834
+ # but showed no memory benefit, so a single chunk is used.
835
+ n_chunks = 1
836
+ else:
837
+ n_chunks = 1
838
+ total_len = concat_pixel_values.size(0)
839
+ # Calculate the chunk size from the total number of grids and n_chunks
840
+ chunk_size = math.ceil(total_len / n_chunks) if total_len > 0 else 1
841
+ image_forward_outs_chunks = []
842
+
843
+ for i in range(n_chunks):
844
+ start = i * chunk_size
845
+ end = (i + 1) * chunk_size
846
+ # Current chunk slice (could be an empty tensor if there's no data)
847
+ chunk = concat_pixel_values[start:end].to(self.vision_model.dtype)
848
+ # If the current chunk size is smaller than chunk_size, pad with dummy data
849
+ if chunk.size(0) < chunk_size:
850
+ # print(f"chunk.size(0): {chunk.size(0)}, chunk_size: {chunk_size}")
851
+ pad_size = chunk_size - chunk.size(0)
852
+ # Create dummy tensor based on concat_pixel_values shape
853
+ dummy_shape = (pad_size,) + tuple(concat_pixel_values.shape[1:])
854
+ dummy = torch.zeros(
855
+ dummy_shape,
856
+ dtype=concat_pixel_values.dtype,
857
+ device=concat_pixel_values.device,
858
+ )
859
+ chunk = torch.cat([chunk, dummy], dim=0)
860
+
861
+ # Pass the chunk through the vision model (processed according to use_nth_layer)
862
+ if self.use_nth_layer == -1:
863
+ # Replace post_layernorm of the last layer with Identity
864
+ self.vision_model.vision_model.post_layernorm = nn.Identity()
865
+ outs = self.vision_model(chunk)
866
+ outs = outs.last_hidden_state[:, visual_token_idx:]
867
+ else:
868
+ outs = self.vision_model(chunk, output_hidden_states=True)
869
+ outs = outs.hidden_states[self.use_nth_layer][:, visual_token_idx:]
870
+ image_forward_outs_chunks.append(outs)
871
+
872
+ # Concatenate results from all chunks
873
+ image_forward_outs = torch.cat(image_forward_outs_chunks, dim=0).to(image_forward_outs_chunks[0].dtype)
874
+
875
+ if num_queries_vis_abstractors is None:
876
+ assert num_queries_vis_abstractors_slow is None
877
+ image_sizes = list(chain(*image_sizes))
878
+ if is_videos is not None:
879
+ is_videos = list(chain(*is_videos))
880
+ group_ids = None
881
+ image_forward_outs = image_forward_outs.to(dtype=self.mm_projector.dtype)
882
+ image_forward_outs = self.mm_projector(image_forward_outs)
883
+ else:
884
+ # adaptive anyres is only implemented in HCXVisionCAbstractor
885
+ assert isinstance(self.mm_projector, HCXVisionCAbstractor)
886
+
887
+ (
888
+ num_queries_vis_abstractors,
889
+ num_grids,
890
+ image_sizes,
891
+ is_videos,
892
+ group_ids,
893
+ ) = self.compute_adaptive_params(
894
+ pixel_values,
895
+ num_queries_vis_abstractors,
896
+ num_queries_vis_abstractors_slow,
897
+ image_sizes,
898
+ is_videos,
899
+ first_last_frames_slows,
900
+ )
901
+
902
+ image_forward_outs = image_forward_outs.to(dtype=self.mm_projector.dtype)
903
+ image_forward_outs = self.mm_projector(
904
+ image_forward_outs,
905
+ num_queries_vis_abstractors=num_queries_vis_abstractors,
906
+ num_grids=num_grids,
907
+ )
908
+
909
+ if self.anyres:
910
+ split_sizes = [pixel_value.shape[0] for pixel_value in chain(*pixel_values)]
911
+
912
+ if num_queries_vis_abstractors is None:
913
+ image_features = anyres_postprocessing(
914
+ image_forward_outs=image_forward_outs,
915
+ split_sizes=split_sizes,
916
+ image_sizes=image_sizes,
917
+ num_queries_vis_abstractor=self.num_queries_vis_abstractor,
918
+ unpad=self.unpad,
919
+ is_videos=is_videos,
920
+ patch_size=self.vision_model.config.patch_size,
921
+ grid_size=self.vision_model.config.image_size,
922
+ image_newline=self.image_newline,
923
+ possible_resolutions=self.possible_resolutions,
924
+ )
925
+ else:
926
+ image_features = adaptive_anyres_postprocessing(
927
+ image_forward_outs=image_forward_outs,
928
+ image_sizes=image_sizes,
929
+ num_queries_vis_abstractors=num_queries_vis_abstractors,
930
+ unpad=self.unpad,
931
+ is_videos=is_videos,
932
+ grid_size=self.vision_model.config.image_size,
933
+ image_newline=self.image_newline,
934
+ possible_resolutions=self.possible_resolutions,
935
+ group_ids=group_ids,
936
+ )
937
+ else:
938
+ if num_queries_vis_abstractors is None:
939
+ image_features = [image_forward_out for image_forward_out in image_forward_outs]
940
+ else:
941
+ image_features = [image_forward_out.unsqueeze(0) for image_forward_out in image_forward_outs]
942
+
943
+ # print(f"BEFORE GROUPING: len(image_features): {len(image_features)}")
944
+ image_features = [
945
+ image_features[sum(len_pixel_values[:i]) : sum(len_pixel_values[: i + 1])]
946
+ for i in range(len(len_pixel_values))
947
+ ]
948
+
949
+ batch_size = input_ids.size(0)
950
+ image_feature_dim = image_features[0][0].size(1)
951
+ image_feature_dtype = image_features[0][0].dtype
952
+
953
+ if img_start_ids_list is None:
954
+ image_cnts = (input_ids == self.config.img_start_id).sum(dim=1).tolist()
955
+ else:
956
+ image_cnts = [len(img_start_ids) for img_start_ids in img_start_ids_list]
957
+
958
+ if non_vision_query_lengths is None:
959
+ non_vision_query_lengths = self.determine_non_vision_query_lengths(
960
+ input_ids, self.tokenizer.pad_token_id, self.config.img_start_id
961
+ )
962
+
963
+ if vision_query_lengths is None:
964
+ vision_query_lengths = self.determine_vision_query_lengths(image_features, image_cnts)
965
+
966
+ # Slicing is faster than concatenation
967
+ len_inputs_embeds = max(
968
+ [
969
+ sum(vision_query_length) + non_vision_query_length
970
+ for non_vision_query_length, vision_query_length in zip(
971
+ non_vision_query_lengths, vision_query_lengths
972
+ )
973
+ ]
974
+ )
975
+ len_inputs_embeds = min(self.decoder_max_length, len_inputs_embeds)
976
+
977
+ inputs_embeds = torch.zeros(
978
+ [batch_size, len_inputs_embeds, image_feature_dim],
979
+ dtype=image_feature_dtype,
980
+ device=self.device,
981
+ requires_grad=True,
982
+ ).clone()
983
+ # temp_embeds : torch.bfloat16 : [batchsize, 174, 3072]
984
+ temp_embeds = self.get_input_embeddings()(input_ids)
985
+
986
+ # The complete format is <PROMPT><USER_PREFIX><VISION_QUERIES>Sentence
987
+ for batch_idx, sample in enumerate(input_ids):
988
+ # Text-token length for this sample (visual tokens excluded)
989
+ non_vision_query_length = non_vision_query_lengths[batch_idx]
990
+ # Keep the text tokens plus one img_start marker per image; drop trailing padding
991
+ sample = sample[: non_vision_query_length + image_cnts[batch_idx]]
992
+
993
+ if image_cnts[batch_idx] == 0: # Text instruction data doesn't insert image features
994
+ temp_idx = 0
995
+ # Reference: https://github.com/haotian-liu/LLaVA/commit/44e0562f9497fb79f042427307472a87d266d90a#diff-4477387d506ccb1897a13972cba26c9da3fad4d3e1c32ec4b8bd8ff7acd3f292
996
+ # https://github.com/intel/intel-extension-for-transformers/issues/1201#issuecomment-1915875119
997
+ inputs_embeds[batch_idx, :non_vision_query_length] = temp_embeds[batch_idx][
998
+ :non_vision_query_length
999
+ ]
1000
+ inputs_embeds[batch_idx, temp_idx:temp_idx] = image_features[batch_idx][0][
1001
+ 0:0
1002
+ ] # First image of batch_idx sample (dummy image)
1003
+ else:
1004
+ if img_start_ids_list is None:
1005
+ img_start_ids = (sample == self.config.img_start_id).nonzero()
1006
+ else:
1007
+ img_start_ids = img_start_ids_list[batch_idx]
1008
+ assert len(img_start_ids) == image_cnts[batch_idx] == len(image_features[batch_idx])
1009
+ # Initialize starting points for input embeddings and temporary embeddings
1010
+ input_start, temp_start = 0, 0
1011
+
1012
+ # Iterate through each image starting point in the batch
1013
+ for multi_img_idx, img_start_idx in enumerate(img_start_ids):
1014
+ # Calculate token length up to the current image starting point
1015
+ token_len = img_start_idx - temp_start
1016
+
1017
+ # Copy tokens to inputs_embeds
1018
+ inputs_embeds[batch_idx, input_start : input_start + token_len] = temp_embeds[
1019
+ batch_idx, temp_start : temp_start + token_len
1020
+ ]
1021
+
1022
+ inputs_embeds[
1023
+ batch_idx,
1024
+ input_start
1025
+ + token_len : input_start
1026
+ + token_len
1027
+ + vision_query_lengths[batch_idx][multi_img_idx],
1028
+ ] = image_features[batch_idx][multi_img_idx]
1029
+
1030
+ # Update starting points for next token processing
1031
+ input_start += token_len + vision_query_lengths[batch_idx][multi_img_idx]
1032
+ temp_start += token_len + 1 # Increase by 1 to skip the image start token
1033
+
1034
+ # Process tokens after the last image end token
1035
+ token_len = min(sample[temp_start:].size(0), inputs_embeds.size(1) - input_start)
1036
+ inputs_embeds[batch_idx, input_start : input_start + token_len] = temp_embeds[
1037
+ batch_idx, temp_start : temp_start + token_len
1038
+ ]
1039
+ return inputs_embeds
1040
+
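+ # Chunking sketch (illustrative): the encoder loop above with n_chunks treated as a knob
+ # (the model code currently uses a single chunk); `encoder` is a stand-in for the vision tower.
+ #   def encode_in_chunks(grids: torch.Tensor, encoder, n_chunks: int = 1) -> torch.Tensor:
+ #       chunk_size = math.ceil(grids.size(0) / n_chunks) if grids.size(0) > 0 else 1
+ #       outs = []
+ #       for i in range(n_chunks):
+ #           chunk = grids[i * chunk_size : (i + 1) * chunk_size]
+ #           if chunk.size(0) < chunk_size:  # pad the tail chunk with zeros
+ #               pad = torch.zeros((chunk_size - chunk.size(0),) + tuple(grids.shape[1:]),
+ #                                 dtype=grids.dtype, device=grids.device)
+ #               chunk = torch.cat([chunk, pad], dim=0)
+ #           outs.append(encoder(chunk))
+ #       return torch.cat(outs, dim=0)[: grids.size(0)]  # drop outputs for the zero padding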
1041
+ @torch.no_grad()
1042
+ def generate(
1043
+ self,
1044
+ input_ids: Optional[torch.LongTensor] = None,
1045
+ pixel_values: Optional[List[List[torch.FloatTensor]]] = None,
1046
+ image_sizes: Optional[List[List[List[int]]]] = None,
1047
+ vision_query_lengths: Optional[List[List[int]]] = None,
1048
+ non_vision_query_lengths: Optional[List[int]] = None,
1049
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
1050
+ num_queries_vis_abstractors_slow: Optional[List[List[int]]] = None,
1051
+ first_last_frames_slows: Optional[List[bool]] = None,
1052
+ is_videos: Optional[List[bool]] = None,
1053
+ img_start_ids_list: Optional[List[List[int]]] = None,
1054
+ pad_token_id: Optional[int] = None,
1055
+ eos_token_id: Optional[int] = None,
1056
+ bad_words_ids: Optional[List[List[int]]] = None,
1057
+ max_length: int = 196,
1058
+ min_length: int = 2,
1059
+ do_sample: bool = True,
1060
+ num_beams: int = 1,
1061
+ top_p: float = 0.6,
1062
+ top_k: int = 0,
1063
+ temperature: float = 0.5,
1064
+ repetition_penalty: float = 1.0,
1065
+ length_penalty: int = 1,
1066
+ use_cache: bool = True,
1067
+ **kwargs,
1068
+ ) -> torch.LongTensor:
1069
+ """Generate text based on input tokens and images.
1070
+
1071
+ This method generates text based on the provided input tokens and images using
1072
+ beam search and/or sampling strategies.
1073
+
1074
+ Args:
1075
+ input_ids: Input token IDs with img_start_id markers for image positions.
1076
+ pixel_values: List of lists of image tensors.
1077
+ image_sizes: List of lists of image dimensions (width, height).
1078
+ vision_query_lengths: List of lists of lengths when each image is converted to visual tokens.
1079
+ non_vision_query_lengths: List of lengths of text tokens (excluding visual tokens) for each sample.
1080
+ num_queries_vis_abstractors: List of lists containing number of visual tokens for each image grid.
1081
+ num_queries_vis_abstractors_slow: List of lists containing number of visual tokens for the slow part when
1082
+ applying the slowfast algorithm to video frames.
1083
+ first_last_frames_slows: List of booleans indicating whether the slowfast algorithm is applied to the first
1084
+ or last frames of the video.
1085
+ is_videos: List of booleans indicating which inputs are videos.
1086
+ img_start_ids_list: List of lists containing indices of img_start_id tokens for each sample.
1087
+ pad_token_id: Token ID used for padding.
1088
+ eos_token_id: Token ID used to signal the end of a sequence.
1089
+ bad_words_ids: List of token ID sequences that should not be generated.
1090
+ max_length: Maximum number of new tokens to generate (forwarded to the language model as max_new_tokens).
1091
+ min_length: Minimum length of the sequence to be generated (input length + min_new_tokens).
1092
+ do_sample: Whether to use sampling for generation (otherwise uses greedy decoding).
1093
+ num_beams: Number of beams for beam search. 1 means no beam search.
1094
+ top_p: Nucleus sampling parameter. Tokens with cumulative probability > top_p are kept.
1095
+ top_k: Number of highest probability tokens to keep for top-k-filtering.
1096
+ temperature: Value used to modulate the next token probabilities.
1097
+ repetition_penalty: Penalty applied to tokens that have already appeared in the sequence.
1098
+ length_penalty: Exponential penalty applied to sequence length.
1099
+ use_cache: Whether to use past key/values for faster inference.
1100
+ **kwargs: Additional keyword arguments.
1101
+
1102
+ Returns:
1103
+ Generated token IDs.
1104
+ """
1105
+ # inputs_embeds: torch.bfloat16 : [batchsize, variable length (includes visual tokens, text tokens, and the system prompt)]
1106
+ if pad_token_id is None:
1107
+ pad_token_id = self.tokenizer.pad_token_id
1108
+ if eos_token_id is None:
1109
+ eos_token_id = self.tokenizer.encode("<|endofturn|>")[0]
1110
+ if bad_words_ids is None:
1111
+ bad_words_ids = [
1112
+ [
1113
+ self.config.language_config["bos_token_id"],
1114
+ ],
1115
+ [
1116
+ self.config.language_config["eos_token_id"],
1117
+ ],
1118
+ ]
1119
+
1120
+ if pixel_values is None:
1121
+ return self.language_model.generate(
1122
+ input_ids, pad_token_id=pad_token_id, eos_token_id=eos_token_id, bad_words_ids=bad_words_ids, **kwargs
1123
+ )
1124
+ inputs_embeds = self.extract_inputs_embeds(
1125
+ input_ids=input_ids,
1126
+ pixel_values=self.to_vision_model_device(pixel_values),
1127
+ image_sizes=image_sizes,
1128
+ vision_query_lengths=vision_query_lengths,
1129
+ non_vision_query_lengths=non_vision_query_lengths,
1130
+ img_start_ids_list=img_start_ids_list,
1131
+ num_queries_vis_abstractors=num_queries_vis_abstractors,
1132
+ num_queries_vis_abstractors_slow=num_queries_vis_abstractors_slow,
1133
+ first_last_frames_slows=first_last_frames_slows,
1134
+ is_videos=is_videos,
1135
+ )
1136
+ inputs_embeds = (
1137
+ inputs_embeds.to(self.base_model.device) if isinstance(inputs_embeds, torch.Tensor) else inputs_embeds
1138
+ )
1139
+
1140
+ # pred : torch.int64 : [batchsize, generated token_length]
1141
+ pred = self.language_model.generate(
1142
+ inputs_embeds=inputs_embeds,
1143
+ pad_token_id=pad_token_id,
1144
+ eos_token_id=eos_token_id,
1145
+ bad_words_ids=bad_words_ids,
1146
+ max_new_tokens=max_length,
1147
+ min_length=min_length,
1148
+ num_beams=num_beams,
1149
+ do_sample=(False if temperature == 0.0 else do_sample), # force greedy decoding when temperature == 0.0
1150
+ top_k=top_k,
1151
+ top_p=top_p,
1152
+ temperature=temperature,
1153
+ repetition_penalty=repetition_penalty,
1154
+ length_penalty=length_penalty,
1155
+ early_stopping=(False if num_beams <= 1 else True), # set early_stopping=False when not beam_search
1156
+ use_cache=use_cache,
1157
+ )
1158
+
1159
+ return pred
1160
+
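+ # Usage sketch (illustrative): the preprocessing step that builds `pixel_values`, `image_sizes`
+ # and `is_videos` is not shown here, so `inputs` below is a hypothetical placeholder.
+ #   output_ids = model.generate(
+ #       input_ids=inputs["input_ids"],
+ #       pixel_values=inputs["pixel_values"],
+ #       image_sizes=inputs["image_sizes"],
+ #       is_videos=inputs["is_videos"],
+ #       max_length=196,          # forwarded as max_new_tokens
+ #       temperature=0.5, top_p=0.6,
+ #   )
+ #   texts = model.tokenizer.batch_decode(output_ids, skip_special_tokens=True)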
1161
+ def to_vision_model_device(self, input_tensor: Union[torch.Tensor, List]) -> Union[torch.Tensor, List]:
1162
+ """Move input tensors to the vision model's device.
1163
+ This method recursively moves input tensors or lists of tensors to the vision model's device.
1164
+
1165
+ Args:
1166
+ input_tensor: Input tensor or list of tensors to be moved to the vision model's device.
1167
+
1168
+ Returns:
1169
+ The input tensor or list of tensors moved to the vision model's device.
1170
+
1171
+ Raises:
1172
+ TypeError: If the input is neither a tensor nor a list.
1173
+ """
1174
+ if isinstance(input_tensor, list):
1175
+ return [self.to_vision_model_device(item) for item in input_tensor]
1176
+ elif isinstance(input_tensor, torch.Tensor):
1177
+ return input_tensor.to(self.vision_model.device)
1178
+ else:
1179
+ raise TypeError("Unsupported data type. Only tensors and lists are allowed.")
1180
+
1181
+ def prepare_inputs_for_generation(
1182
+ self,
1183
+ input_ids: torch.LongTensor,
1184
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
1185
+ attention_mask: Optional[torch.FloatTensor] = None,
1186
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1187
+ **kwargs,
1188
+ ) -> Dict[str, Any]:
1189
+ """Prepare inputs for the generation algorithm.
1190
+
1191
+ This method prepares the input for each generation step based on the model's needs.
1192
+
1193
+ Args:
1194
+ input_ids: Input token IDs.
1195
+ past_key_values: Pre-computed key and value states for faster inference.
1196
+ attention_mask: Mask to avoid performing attention on padding token indices.
1197
+ inputs_embeds: Input embeddings. If provided, input_ids will not be used.
1198
+ **kwargs: Additional keyword arguments.
1199
+
1200
+ Returns:
1201
+ Dictionary containing the prepared inputs for the model.
1202
+ """
1203
+ input_ids = kwargs.get("decoder_input_ids", input_ids)
1204
+
1205
+ if past_key_values:
1206
+ input_ids = input_ids[:, -1:]
1207
+
1208
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1209
+ if inputs_embeds is not None and past_key_values is None:
1210
+ model_inputs = {"inputs_embeds": inputs_embeds}
1211
+ else:
1212
+ model_inputs = {"input_ids": input_ids}
1213
+
1214
+ model_inputs.update(
1215
+ {
1216
+ "past_key_values": past_key_values,
1217
+ "use_cache": kwargs.get("use_cache"),
1218
+ "attention_mask": attention_mask,
1219
+ "pixel_values": kwargs.get("pixel_values", None),
1220
+ }
1221
+ )
1222
+ return model_inputs
1223
+
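+ # Generation-loop sketch (illustrative): on the first step the multimodal embeddings are fed and
+ # input_ids is dropped; on later steps only the most recent token id is kept.
+ #   first = model.prepare_inputs_for_generation(input_ids, inputs_embeds=embeds)
+ #   # -> {"inputs_embeds": embeds, "past_key_values": None, ...}
+ #   later = model.prepare_inputs_for_generation(generated_ids, past_key_values=cache)
+ #   # -> {"input_ids": generated_ids[:, -1:], "past_key_values": cache, ...}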
1224
+ @classmethod
1225
+ def from_config(cls, config, vision_model_name_or_path):
1226
+ return cls(config, vision_model_name_or_path)
1227
+
1228
+ @classmethod
1229
+ def from_pretrained(
1230
+ cls,
1231
+ pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] = None,
1232
+ *model_args,
1233
+ **kwargs,
1234
+ ) -> "HCXVisionForCausalLM":
1235
+ assert pretrained_model_name_or_path is not None
1236
+
1237
+ save_only_vision = kwargs.pop("save_only_vision") if "save_only_vision" in kwargs else False
1238
+ save_only_qformer = kwargs.pop("save_only_qformer") if "save_only_qformer" in kwargs else False
1239
+ save_shard_size = kwargs.pop("save_shard_size") if "save_shard_size" in kwargs else "5GB"
1240
+
1241
+ if pretrained_model_name_or_path is not None: # when evaluating or loading an instruction-tuned model
1242
+ model: HCXVisionForCausalLM = super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
1243
+ model.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)
1244
+
1245
+ img_start_id = model.tokenizer.encode(IMG_LOC, add_special_tokens=False)
1246
+ assert (
1247
+ len(img_start_id) == 1
1248
+ ), f'"<|dummy3|>" was not encoded into a single special token. Encoding result: {img_start_id}'
1249
+ model.config.img_start_id = img_start_id[0]
1250
+
1251
+ model.save_only_vision = save_only_vision
1252
+ model.save_only_qformer = save_only_qformer
1253
+ model.save_shard_size = save_shard_size
1254
+
1255
+ return model
1256
+
1257
+ def get_language_model(self):
1258
+ return self.language_model.base_model
1259
+
1260
+ def get_vision_model(self):
1261
+ return self.vision_model
1262
+
1263
+ def save_pretrained(
1264
+ self,
1265
+ save_directory: Union[str, os.PathLike],
1266
+ *args,
1267
+ **kwargs,
1268
+ ):
1269
+ state_dict = kwargs["state_dict"] if "state_dict" in kwargs else self.state_dict()
1270
+ partial_state_dict = self.get_pretrained_state_dict(
1271
+ state_dict,
1272
+ save_directory,
1273
+ )
1274
+ kwargs["state_dict"] = partial_state_dict
1275
+ kwargs["safe_serialization"] = self.is_safetensor_save
1276
+ kwargs.setdefault("max_shard_size", self.save_shard_size)
1277
+ super().save_pretrained(save_directory, *args, **kwargs)
1278
+
1279
+ def get_pretrained_state_dict(self, state_dict, save_dir):
1280
+ vision_key = "vision_model."
1281
+ llm_keys = ["language_model."]
1282
+ head_key = "lm_head."
1283
+
1284
+ for key in list(state_dict.keys()):
1285
+ if self.save_only_vision:
1286
+ for llm_key in llm_keys:
1287
+ if llm_key in key:
1288
+ state_dict.pop(key)
1289
+ if key.startswith(head_key):
1290
+ state_dict.pop(key)
1291
+
1292
+ elif self.save_only_qformer:
1293
+ if f"{vision_key}" in key:
1294
+ state_dict.pop(key)
1295
+
1296
+ return state_dict
1297
+
1298
+ def compute_adaptive_params(
1299
+ self,
1300
+ pixel_values: Optional[List[List[torch.FloatTensor]]] = None,
1301
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
1302
+ num_queries_vis_abstractors_slow: Optional[List[List[int]]] = None,
1303
+ image_sizes: Optional[List[List[List[int]]]] = None,
1304
+ is_videos: Optional[List[bool]] = None,
1305
+ first_last_frames_slows: Optional[List[bool]] = None,
1306
+ ) -> Tuple[List[int], List[int], List[List[int]], List[bool], List[List[int]]]:
1307
+ """Compute adaptive parameters for processing different image and video inputs.
1308
+
1309
+ This method calculates parameters needed for adaptive processing, especially when handling
1310
+ variable resolutions or applying the slowfast algorithm to video frames. It flattens
1311
+ batch-level inputs (lists of lists) into single lists representing all images/frames
1312
+ in the batch. Based on slowfast configuration, it may split video frames into 'slow'
1313
+ and 'fast' components, adjusting query counts and grid indices accordingly.
1314
+
1315
+ Args:
1316
+ pixel_values: List of lists of image tensors (per sample). Used to determine the initial number of grids per
1317
+ image/frame.
1318
+ num_queries_vis_abstractors: List of lists (per sample) containing the base number of visual tokens
1319
+ generated by the visual abstractor for each image grid
1320
+ (e.g., 81 for a full grid, 9 for a subsampled/fast grid).
1321
+ num_queries_vis_abstractors_slow: List of lists (per sample) containing the number of visual tokens for the
1322
+ 'slow' path when applying slowfast. Non-zero values here trigger the slowfast processing logic.
1323
+ image_sizes: List of lists (per sample) of original image dimensions ([width, height]).
1324
+ is_videos: List of lists (per sample) of booleans indicating if each input item is part of a video sequence.
1325
+ first_last_frames_slows: List (per sample) of booleans. If True, slowfast logic
1326
+ (if active based on `num_queries_vis_abstractors_slow`) is applied only to the first or last frame(s)
1327
+ within each video sequence.
1328
+
1329
+ Returns:
1330
+ Tuple containing:
1331
+ - num_queries_vis_abstractors: Flattened list of final query counts per processed grid.
1332
+ Values might be adjusted based on slow/fast splitting
1333
+ (e.g., using values from `num_queries_vis_abstractors_slow` for slow frames).
1334
+ Example: [81, 81, 81, 9, 81, 9, ...] (Image, Image, Vid_Slow, Vid_Fast, Vid_Slow, Vid_Fast...)
1335
+ - num_grids: Flattened list representing cumulative grid counts, acting as end indices for slicing the
1336
+ flattened `image_forward_outs`. Adjusted for slow/fast splits.
1337
+ Example: [0, 1, 9, 10, 18, 19, 27, ...] (Indices after Grid0_Slow(1),
1338
+ Grid1_Fast(8), Grid2_Slow(1), Grid3_Fast(8)...).
1339
+ - image_sizes: Flattened list of image dimensions ([width, height]), potentially duplicated if slow/fast
1340
+ splitting occurred.
1341
+ - is_videos: Flattened list of booleans indicating video status, potentially duplicated for
1342
+ slow/fast splits. Example: [False, False, True, True, True, True, ...]
1343
+ (Image1, Image2, Vid_grid1_slow, Vid_grid1_fast, Vid_grid2_slow, Vid_grid2_fast...)
1344
+ - group_ids: List of lists, grouping indices that correspond to the same original image or frame.
1345
+ If a frame is split into slow/fast, its group will contain multiple indices.
1346
+ Example: [[0], [1], [2, 3], [4, 5], ...]
1347
+ (Group for Image1, Group for Image2, Group for Vid1_Slow+Fast, Group for Vid2_Slow+Fast...).
1348
+
1349
+ Raises:
1350
+ AssertionError: If input validation fails (e.g., negative query counts).
1351
+ Exception: If an unexpected case is encountered during slowfast processing.
1352
+ """
1353
+
1354
+ # Check if all elements are integers greater than or equal to 0
1355
+ assert all(
1356
+ all(isinstance(value, int) and value >= 0 for value in sublist) for sublist in num_queries_vis_abstractors
1357
+ ), "All values in num_queries_vis_abstractors must be integers >= 0."
1358
+
1359
+ assert all(
1360
+ all(isinstance(value, int) and value >= 0 for value in sublist)
1361
+ for sublist in num_queries_vis_abstractors_slow
1362
+ ), "All values in num_queries_vis_abstractors_slow must be integers >= 0."
1363
+
1364
+ assert is_videos is not None
1365
+
1366
+ # Mark whether each frame is the first or last of its video (used when applying slowfast)
1367
+ is_first_images = []
1368
+ is_last_images = []
1369
+ for is_video in is_videos:
1370
+ for idx, is_video_item in enumerate(is_video):
1371
+ if idx == 0:
1372
+ is_first_images.append(True)
1373
+ else:
1374
+ is_first_images.append(False)
1375
+ if idx == len(is_video) - 1:
1376
+ is_last_images.append(True)
1377
+ else:
1378
+ is_last_images.append(False)
1379
+
1380
+ num_queries_vis_abstractors = list(chain(*num_queries_vis_abstractors))
1381
+ num_queries_vis_abstractors_slow = list(chain(*num_queries_vis_abstractors_slow))
1382
+ image_sizes = list(chain(*image_sizes))
1383
+ is_videos = list(chain(*is_videos))
1384
+ first_last_frames_slows = list(chain(*first_last_frames_slows))
1385
+
1386
+ # Use slowfast mode if there's at least one visual token count greater than 0 in num_queries_vis_abstractors_slow
1387
+ use_slowfast = any([num_query > 0 for num_query in num_queries_vis_abstractors_slow])
1388
+ num_grids = [pixel_value.shape[0] for pixel_value in chain(*pixel_values)]
1389
+ num_grids = [0] + num_grids
1390
+ group_ids = []
1391
+
1392
+ if use_slowfast:
1393
+ new_num_grids = [num_grids[0]]
1394
+ new_num_queries = []
1395
+ new_image_sizes = []
1396
+ new_is_videos = []
1397
+
1398
+ # When slowfast is used, split each frame's grids more finely:
1399
+ # 0th local grid is slow frame, remaining local grids are fast frames
1400
+ for (
1401
+ num_query,
1402
+ num_query_slow,
1403
+ num_grid,
1404
+ image_size,
1405
+ is_video,
1406
+ first_last_frames_slow,
1407
+ is_first_image,
1408
+ is_last_image,
1409
+ ) in zip(
1410
+ num_queries_vis_abstractors,
1411
+ num_queries_vis_abstractors_slow,
1412
+ num_grids[1:],
1413
+ image_sizes,
1414
+ is_videos,
1415
+ first_last_frames_slows,
1416
+ is_first_images,
1417
+ is_last_images,
1418
+ ):
1419
+
1420
+ if not first_last_frames_slow and num_query_slow > 0: # Process every frame of this video in slowfast mode
1421
+ assert is_video # slowfast mode is only applied to videos
1422
+
1423
+ this_group_ids = [group_ids[-1][-1] + 1 if group_ids else 0]
1424
+
1425
+ # slow frame (first grid)
1426
+ new_num_grids.append(new_num_grids[-1] + 1)
1427
+ new_num_queries.append(num_query_slow)
1428
+ new_image_sizes.append(image_size)
1429
+ new_is_videos.append(is_video)
1430
+
1431
+ if num_grid >= 2:
1432
+ # fast frames
1433
+ new_num_grids.append(new_num_grids[-1] + num_grid - 1)
1434
+ new_num_queries.append(num_query)
1435
+ new_image_sizes.append(image_size)
1436
+ new_is_videos.append(is_video)
1437
+ this_group_ids.append(this_group_ids[-1] + 1)
1438
+
1439
+ group_ids.append(this_group_ids)
1440
+ elif (
1441
+ first_last_frames_slow and num_query_slow > 0 and (is_first_image or is_last_image)
1442
+ ): # Process only first/last image in slowfast mode
1443
+ # Case for special treatment of first/last frames in slow mode
1444
+ assert is_video # slowfast mode is only applied to videos
1445
+
1446
+ this_group_ids = [group_ids[-1][-1] + 1 if group_ids else 0]
1447
+
1448
+ if num_grid == 1:
1449
+ # Simply process with slow since there's only one grid
1450
+ new_num_grids.append(new_num_grids[-1] + 1)
1451
+ new_num_queries.append(num_query_slow)
1452
+ new_image_sizes.append(image_size)
1453
+ new_is_videos.append(is_video)
1454
+
1455
+ if num_grid >= 2:
1456
+ # Special treatment for first or last grid depending on is_first_image or is_last_image
1457
+
1458
+ if is_first_image: # includes both first and last
1459
+ # slow frame (first grid)
1460
+ new_num_grids.append(new_num_grids[-1] + 1)
1461
+ new_num_queries.append(num_query_slow)
1462
+ new_image_sizes.append(image_size)
1463
+ new_is_videos.append(is_video)
1464
+ # fast frames
1465
+ new_num_grids.append(new_num_grids[-1] + num_grid - 1)
1466
+ new_num_queries.append(num_query)
1467
+ new_image_sizes.append(image_size)
1468
+ new_is_videos.append(is_video)
1469
+ this_group_ids.append(this_group_ids[-1] + 1)
1470
+ elif is_last_image:
1471
+ # fast frames
1472
+ new_num_grids.append(new_num_grids[-1] + num_grid - 1)
1473
+ new_num_queries.append(num_query)
1474
+ new_image_sizes.append(image_size)
1475
+ new_is_videos.append(is_video)
1476
+ # slow frame (last grid)
1477
+ new_num_grids.append(new_num_grids[-1] + 1)
1478
+ new_num_queries.append(num_query_slow)
1479
+ new_image_sizes.append(image_size)
1480
+ new_is_videos.append(is_video)
1481
+ this_group_ids.append(this_group_ids[-1] + 1)
1482
+ else:
1483
+ raise Exception("This case should not be reached.")
1484
+ group_ids.append(this_group_ids)
1485
+ else:
1486
+ # Not in slowfast mode, so reduce all by num_query (fast)
1487
+ new_num_grids.append(new_num_grids[-1] + num_grid)
1488
+ new_num_queries.append(num_query)
1489
+ new_image_sizes.append(image_size)
1490
+ new_is_videos.append(is_video)
1491
+
1492
+ start_group_id = group_ids[-1][-1] + 1 if group_ids else 0
1493
+ group_ids.append([start_group_id])
1494
+
1495
+ num_grids = new_num_grids
1496
+ num_queries_vis_abstractors = new_num_queries
1497
+ image_sizes = new_image_sizes
1498
+ is_videos = new_is_videos
1499
+ else:
1500
+ num_grids = [sum(num_grids[:i]) for i in range(1, len(num_grids) + 1)]
1501
+ group_ids = [[group_id] for group_id in range(len(is_videos))]
1502
+
1503
+ return num_queries_vis_abstractors, num_grids, image_sizes, is_videos, group_ids
1504
+
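+ # Worked example (illustrative, non-slowfast path): two images in one sample, the first a single
+ # 1x1 grid and the second an anyres 3x3 image (9 grids), each abstracted to 81 queries:
+ #   num_queries_vis_abstractors      = [[81, 81]]
+ #   num_queries_vis_abstractors_slow = [[0, 0]]       # no slow path -> use_slowfast is False
+ #   per-grid counts                   -> [0, 1, 9]
+ #   returned num_queries (unchanged)  -> [81, 81]
+ #   returned num_grids (cumulative)   -> [0, 1, 10]   # slice boundaries into image_forward_outs
+ #   returned group_ids                -> [[0], [1]]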
1505
+
1506
+ def load_state_dict_into_model(model_to_load, state_dict, strict=True, start_prefix=""):
1507
+ # from https://github.com/huggingface/transformers/blob/0a55d9f7376f72ad3ff296d4249840021b03bcc4/src/transformers/modeling_utils.py#L517
1508
+ # Convert old format to new format if needed from a PyTorch state_dict
1509
+ old_keys = []
1510
+ new_keys = []
1511
+ for key in state_dict.keys():
1512
+ new_key = None
1513
+ if "gamma" in key:
1514
+ new_key = key.replace("gamma", "weight")
1515
+ if "beta" in key:
1516
+ new_key = key.replace("beta", "bias")
1517
+ if new_key:
1518
+ old_keys.append(key)
1519
+ new_keys.append(new_key)
1520
+ for old_key, new_key in zip(old_keys, new_keys):
1521
+ state_dict[new_key] = state_dict.pop(old_key)
1522
+
1523
+ # copy state_dict so _load_from_state_dict can modify it
1524
+ metadata = getattr(state_dict, "_metadata", None)
1525
+ state_dict = state_dict.copy()
1526
+ if metadata is not None:
1527
+ state_dict._metadata = metadata
1528
+
1529
+ error_msgs = []
1530
+
1531
+ # PyTorch's `_load_from_state_dict` does not copy parameters in a module's descendants
1532
+ # so we need to apply the function recursively.
1533
+ def load(module: nn.Module, state_dict, prefix=""):
1534
+ local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {})
1535
+ args = (state_dict, prefix, local_metadata, strict, [], [], error_msgs)
1536
+ # Parameters of module and children will start with prefix. We can exit early if there are none in this
1537
+ # state_dict
1538
+ if len([key for key in state_dict if key.startswith(prefix)]) > 0:
1539
+ if is_deepspeed_zero3_enabled():
1540
+ import deepspeed
1541
+
1542
+ # In sharded models, each shard has only part of the full state_dict, so only gather
1543
+ # parameters that are in the current state_dict.
1544
+ named_parameters = dict(module.named_parameters(prefix=prefix[:-1], recurse=False))
1545
+ params_to_gather = [named_parameters[k] for k in state_dict.keys() if k in named_parameters]
1546
+ if len(params_to_gather) > 0:
1547
+ # because zero3 puts placeholders in model params, this context
1548
+ # manager gathers (unpartitions) the params of the current layer, then loads from
1549
+ # the state dict and then re-partitions them again
1550
+ with deepspeed.zero.GatheredParameters(params_to_gather, modifier_rank=0):
1551
+ if torch.distributed.get_rank() == 0:
1552
+ module._load_from_state_dict(*args)
1553
+ else:
1554
+ module._load_from_state_dict(*args)
1555
+
1556
+ for name, child in module._modules.items():
1557
+ if child is not None:
1558
+ load(child, state_dict, prefix + name + ".")
1559
+
1560
+ load(model_to_load, state_dict, prefix=start_prefix)
1561
+ # Delete `state_dict` so it could be collected by GC earlier. Note that `state_dict` is a copy of the argument, so
1562
+ # it's safe to delete it.
1563
+ del state_dict
1564
+
1565
+ return error_msgs
1566
+
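+ # Renaming sketch (illustrative): the gamma/beta rewrite above performs a substring rename like
+ #   renamed = {k.replace("gamma", "weight").replace("beta", "bias"): v
+ #              for k, v in state_dict.items()}
+ # e.g. {"encoder.ln.gamma": w, "encoder.ln.beta": b} -> {"encoder.ln.weight": w, "encoder.ln.bias": b}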
1567
+
1568
+ class HCXVisionCAbstractor(nn.Module):
1569
+ """
1570
+ This module is based on C-Abstractor, which is licensed under Apache-2.0.
1571
+ You can check the original code at https://github.com/khanrc/honeybee/blob/main/honeybee/projectors/projectors.py
1572
+ and we made necessary modifications.
1573
+ """
1574
+
1575
+ def __init__(
1576
+ self,
1577
+ num_queries: int,
1578
+ num_input_tokens: int,
1579
+ encoder_hidden_size: int,
1580
+ hidden_size: int,
1581
+ output_hidden_size: int,
1582
+ pos_emb: bool = True,
1583
+ prenorm: bool = False,
1584
+ ):
1585
+ super().__init__()
1586
+ self.num_input_tokens = num_input_tokens
1587
+ self.output_hidden_size = output_hidden_size
1588
+
1589
+ # Positional embedding
1590
+ if pos_emb:
1591
+ self.pos_emb = torch.nn.Parameter(torch.zeros(1, num_input_tokens, encoder_hidden_size))
1592
+ self.pos_emb.data.normal_(mean=0.0, std=0.02)
1593
+ else:
1594
+ self.pos_emb = None
1595
+
1596
+ # (Optional) Pre-normalization layer
1597
+ if prenorm:
1598
+ self.prenorm = LayerNorm(encoder_hidden_size)
1599
+ else:
1600
+ self.prenorm = None
1601
+
1602
+ self.build_net(num_queries, encoder_hidden_size, hidden_size, output_hidden_size)
1603
+ self.dtype = next(self.parameters()).dtype
1604
+
1605
+ def forward(
1606
+ self,
1607
+ x: torch.Tensor,
1608
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
1609
+ num_grids: Optional[List[int]] = None,
1610
+ ) -> torch.Tensor:
1611
+ """
1612
+ Args:
1613
+ x: (B, L, encoder_hidden_size) tensor from the visual backbone (e.g. CLIP visual encoder), including cls token.
1614
+ """
1615
+ if self.prenorm is not None:
1616
+ x = self.prenorm(x)
1617
+
1618
+ if self.pos_emb is not None:
1619
+ x = x + self.pos_emb
1620
+
1621
+ x = self._forward(
1622
+ x,
1623
+ num_queries_vis_abstractors=num_queries_vis_abstractors,
1624
+ num_grids=num_grids,
1625
+ ) # (B, L, output_hidden_size)
1626
+
1627
+ return x
1628
+
1629
+ def _forward(
1630
+ self,
1631
+ x: torch.Tensor,
1632
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
1633
+ num_grids: Optional[List[int]] = None,
1634
+ ) -> torch.Tensor:
1635
+ # x: [B, L, dim]
1636
+ B, L, dim = x.shape
1637
+ hw = int(L ** 0.5)
1638
+ x = rearrange(x, "b (h w) d -> b d h w", h=hw, w=hw)
1639
+
1640
+ if num_queries_vis_abstractors is not None:
1641
+ assert num_grids is not None
1642
+ return self._forward_adaptive_num_query(x, num_queries_vis_abstractors, num_grids)
1643
+
1644
+ x = self.net(x)
1645
+ x = rearrange(x, "b d h w -> b (h w) d")
1646
+ x = self.readout(x)
1647
+ return x
1648
+
1649
+ def _forward_adaptive_num_query(
1650
+ self,
1651
+ x: torch.Tensor,
1652
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
1653
+ num_grids: Optional[List[int]] = None,
1654
+ ) -> List[torch.Tensor]:
1655
+ # self.net consists of 3 stages (s1, sampler, s2)
1656
+ assert len(self.net) == 3
1657
+
1658
+ x = self.net[0](x) # s1
1659
+ new_x = []
1660
+ for i, num_queries in enumerate(num_queries_vis_abstractors):
1661
+ hw = int(num_queries**0.5)
1662
+ sampler = nn.AdaptiveAvgPool2d((hw, hw))
1663
+ out = sampler(x[num_grids[i]:num_grids[i + 1], :])
1664
+ out = self.net[2](out) # s2
1665
+
1666
+ out = rearrange(out, "b d h w -> b (h w) d")
1667
+ out = self.readout(out)
1668
+
1669
+ new_x.append(out)
1670
+ return new_x
1671
+
1672
+ def build_net(
1673
+ self,
1674
+ n_queries: int,
1675
+ encoder_hidden_size: int,
1676
+ hidden_size: int,
1677
+ output_hidden_size: int,
1678
+ depth: int = 3,
1679
+ mlp_depth: int = 2,
1680
+ ):
1681
+ assert (n_queries ** 0.5).is_integer(), f"n_queries must be square number. n_queries: {n_queries}"
1682
+ hw = int(n_queries ** 0.5)
1683
+
1684
+ # RegBlock = ResBlock + SE
1685
+ RegBlock = partial(
1686
+ RegStage,
1687
+ stride=1,
1688
+ dilation=1,
1689
+ act_layer=nn.SiLU,
1690
+ norm_layer=LayerNorm2d,
1691
+ )
1692
+
1693
+ s1 = RegBlock(
1694
+ depth,
1695
+ encoder_hidden_size,
1696
+ hidden_size,
1697
+ )
1698
+ sampler = nn.AdaptiveAvgPool2d((hw, hw))
1699
+ s2 = RegBlock(
1700
+ depth,
1701
+ hidden_size,
1702
+ hidden_size,
1703
+ )
1704
+
1705
+ self.net = nn.Sequential(s1, sampler, s2)
1706
+ self.readout = self.build_mlp(mlp_depth, hidden_size, output_hidden_size)
1707
+
1708
+ def build_mlp(
1709
+ self,
1710
+ depth: int,
1711
+ hidden_size: int,
1712
+ output_hidden_size: int,
1713
+ ):
1714
+ layers = [nn.Linear(hidden_size, output_hidden_size)]
1715
+ for _ in range(1, depth):
1716
+ layers.append(nn.SiLU())
1717
+ layers.append(nn.Linear(output_hidden_size, output_hidden_size))
1718
+ return nn.Sequential(*layers)
1719
+
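+ # Shape sketch (illustrative; the hidden sizes below are hypothetical, not the shipped config):
+ #   proj = HCXVisionCAbstractor(num_queries=81, num_input_tokens=576,
+ #                               encoder_hidden_size=1024, hidden_size=1024,
+ #                               output_hidden_size=3072)
+ #   feats = torch.randn(4, 576, 1024)   # 4 grids of 24x24 visual tokens
+ #   out = proj(feats)                   # -> (4, 81, 3072): 9x9 queries per grid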
1720
+ def load_sharded_checkpoint(
1721
+ model, folder, pick_prefix="", replace_prefix_list=[], replace_prefix_dict={}, print_info=True
1722
+ ):
1723
+ if folder is None:
1724
+ return {}
1725
+
1726
+ files = os.listdir(folder)
1727
+
1728
+ # find relevant files
1729
+ pytorch_bin_files = [file for file in files if file.startswith("pytorch_model") and file.endswith(".bin")]
1730
+ safetensor_files = [file for file in files if file.endswith(".safetensors")]
1731
+ shard_index_file = [file for file in files if file.endswith(".index.json")]
1732
+
1733
+ # check if sharded
1734
+ index_present = len(shard_index_file) > 0
1735
+ index_file = os.path.join(folder, shard_index_file[0]) if index_present else []
1736
+
1737
+ # check if safetensor
1738
+ is_safetensor = len(safetensor_files) > 0
1739
+
1740
+ model_keys = model.state_dict().keys()
1741
+
1742
+ if is_safetensor:
1743
+ from safetensors.torch import load_file
1744
+
1745
+ load_function = load_file
1746
+ shard_files = safetensor_files
1747
+ else:
1748
+ load_function = partial(torch.load, map_location="cpu")
1749
+ shard_files = pytorch_bin_files
1750
+
1751
+ # sharded case
1752
+ if index_present:
1753
+ with open(index_file, "r", encoding="utf-8") as f:
1754
+ index = json.load(f)
1755
+ loaded_keys = index["weight_map"].keys()
1756
+ if pick_prefix:
1757
+ loaded_keys = [k[len(pick_prefix) :] for k in loaded_keys if k.startswith(pick_prefix)]
1758
+ if replace_prefix_list:
1759
+ for rep_prefix in replace_prefix_list:
1760
+ loaded_keys = [k[len(rep_prefix) :] if k.startswith(rep_prefix) else k for k in loaded_keys]
1761
+ if replace_prefix_dict:
1762
+ for rep_prefix in replace_prefix_dict:
1763
+ loaded_keys = [
1764
+ k.replace(rep_prefix, replace_prefix_dict[rep_prefix]) if k.startswith(rep_prefix) else k
1765
+ for k in loaded_keys
1766
+ ]
1767
+
1768
+ for i, shard_file in enumerate(shard_files):
1769
+ state_dict = load_function(os.path.join(folder, shard_file))
1770
+
1771
+ # if pick_prefix, use only pick
1772
+ if pick_prefix:
1773
+ state_dict = {k[len(pick_prefix) :]: v for k, v in state_dict.items() if k.startswith(pick_prefix)}
1774
+
1775
+ for rep_prefix in replace_prefix_list:
1776
+ state_dict = {k[len(rep_prefix) :] if k.startswith(rep_prefix) else k: v for k, v in state_dict.items()}
1777
+
1778
+ for rep_prefix in replace_prefix_dict:
1779
+ state_dict = {
1780
+ k.replace(rep_prefix, replace_prefix_dict[rep_prefix]) if k.startswith(rep_prefix) else k: v
1781
+ for k, v in state_dict.items()
1782
+ }
1783
+
1784
+ if is_deepspeed_zero3_enabled():
1785
+ # torch.distributed.barrier()
1786
+ rank = torch.distributed.get_rank()
1787
+ print(f"# [info] ZeRo3 - load sharded no {i}, rank {rank}")
1788
+ load_state_dict_into_model(model, state_dict, strict=False)
1789
+ elif is_fsdp_enabled():
1790
+ if is_local_dist_rank_0():
1791
+ model.load_state_dict(state_dict, strict=False)
1792
+ else:
1793
+ model.load_state_dict(state_dict, strict=False)
1794
+ # Make sure memory is freed before we load the next state dict.
1795
+
1796
+ if not index_present:
1797
+ loaded_keys = state_dict.keys()
1798
+
1799
+ del state_dict
1800
+ gc.collect()
1801
+
1802
+ # missing keys
1803
+ missing_keys = [key for key in model_keys if key not in loaded_keys]
1804
+ unexpected_keys = [key for key in loaded_keys if key not in model_keys]
1805
+
1806
+ if get_rank() == 0 and print_info:
1807
+ print(f"[info] missing_keys: {missing_keys}")
1808
+ print(f"[info] unexpected_keys: {unexpected_keys}")
1809
+
1810
+ return {"missing_keys": missing_keys, "unexpected_keys": unexpected_keys}
preprocessor.py ADDED
@@ -0,0 +1,1583 @@
1
+ import base64
2
+ import copy
3
+ import io
4
+ import math
5
+ import os
6
+ import uuid
7
+ from typing import Dict, List, Optional, Union
8
+ from urllib.parse import urlparse
9
+
10
+ import av
11
+ import cv2
12
+ import numpy as np
13
+ import requests
14
+ import torch
15
+ from decord import VideoReader, cpu
16
+ from PIL import Image, UnidentifiedImageError
17
+ from transformers.image_processing_utils import (
18
+ BaseImageProcessor,
19
+ BatchFeature,
20
+ get_size_dict,
21
+ )
22
+ from transformers.image_transforms import (
23
+ convert_to_rgb,
24
+ get_resize_output_image_size,
25
+ resize,
26
+ to_channel_dimension_format,
27
+ )
28
+ from transformers.image_utils import (
29
+ OPENAI_CLIP_MEAN,
30
+ OPENAI_CLIP_STD,
31
+ ChannelDimension,
32
+ ImageInput,
33
+ PILImageResampling,
34
+ get_image_size,
35
+ infer_channel_dimension_format,
36
+ is_scaled_image,
37
+ make_list_of_images,
38
+ to_numpy_array,
39
+ valid_images,
40
+ )
41
+ from transformers.utils import TensorType, logging
42
+
43
+ logger = logging.get_logger(__name__)
44
+
45
+
46
+ def determine_possible_resolutions(anyres: bool, max_num_grids: int, grid_size: int, use_1x1_grid: bool = False):
47
+ """
48
+ Finds and returns possible resolution combinations with a total number of grids less than or equal to max_num_grids.
49
+
50
+ For example, if max_num_grids is 4, the possible grid combinations are:
51
+ [1x1, 1x2, 1x3, 1x4, 2x1, 2x2, 3x1, 4x1], and the resolutions are calculated accordingly.
52
+
53
+ Example:
54
+ >>> possible_resolutions = determine_possible_resolutions(anyres=True, max_num_grids=4, grid_size=336)
55
+ >>> print(possible_resolutions)
56
+ [[336, 336], [336, 672], [336, 1008], [336, 1344], [672, 336], [672, 672], [1008, 336], [1344, 336]]
57
+
58
+ Args:
59
+ anyres (bool): Whether to allow any resolution combinations up to the maximum grid count.
60
+ max_num_grids (int): The maximum number of grids allowed (height x width must be ≤ this value).
61
+ grid_size (int): The size of each grid in pixels (e.g., 336).
62
+ use_1x1_grid (bool, optional): Whether to include the 1x1 grid as a valid resolution. Defaults to False.
63
+
64
+ Returns:
65
+ List[List[int]]: A list of possible [height, width] resolution pairs.
66
+ """
67
+ possible_resolutions = []
68
+ if anyres:
69
+ assert max_num_grids > 0
70
+ for i in range(1, max_num_grids + 1):
71
+ for j in range(1, max_num_grids + 1):
72
+ if i == 1 and j == 1 and not use_1x1_grid:
73
+ continue
74
+ if i * j <= max_num_grids:
75
+ possible_resolutions.append([i, j])
76
+
77
+ possible_resolutions = [[ys * grid_size, xs * grid_size] for ys, xs in possible_resolutions]
78
+
79
+ return possible_resolutions
80
+
81
+
82
+ def divide_to_grids(image: np.array, grid_size: int, input_data_format=None) -> List[np.array]:
83
+ """
84
+ Divides a local image into grids of size (grid_size x grid_size).
85
+
86
+ Args:
87
+ image (np.array): Input image as a NumPy array.
88
+ grid_size (int): The size (in pixels) of each square grid.
89
+ input_data_format (optional): Optional format specifier (e.g., "channels_first" or "channels_last").
90
+
91
+ Returns:
92
+ List[np.array]: A list of image patches, each of size (grid_size x grid_size).
93
+ """
94
+ grids = []
95
+ height, width = get_image_size(image, channel_dim=input_data_format)
96
+ for i in range(0, height, grid_size):
97
+ for j in range(0, width, grid_size):
98
+ if input_data_format == ChannelDimension.LAST:
99
+ grid = image[i : i + grid_size, j : j + grid_size]
100
+ else:
101
+ grid = image[:, i : i + grid_size, j : j + grid_size]
102
+ grids.append(grid)
103
+
104
+ return grids
105
+
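+ # Example (illustrative): a 672x672 channels-last image with grid_size=336 is split into a
+ # 2x2 layout, i.e. four 336x336 patches returned in row-major order.
+ #   img = np.zeros((672, 672, 3), dtype=np.uint8)
+ #   grids = divide_to_grids(img, grid_size=336, input_data_format=ChannelDimension.LAST)
+ #   len(grids)       # -> 4
+ #   grids[0].shape   # -> (336, 336, 3)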
106
+
107
+ def pad(
108
+ image: np.array,
109
+ target_size: tuple,
110
+ background_color=(127, 127, 127),
111
+ input_data_format=None,
112
+ ) -> np.array:
113
+ """
114
+ Pads the input image on the sides (top/bottom and left/right) to match the target height and width.
115
+
116
+ Args:
117
+ image (np.array): Input image as a NumPy array.
118
+ target_size (tuple): Target size as (target_height, target_width).
119
+ background_color (tuple, optional): RGB color value used for padding. Defaults to (127, 127, 127).
120
+ input_data_format (optional): Optional format specifier (e.g., "channels_first" or "channels_last").
121
+
122
+ Returns:
123
+ np.array: The padded image with the specified target size.
124
+ """
125
+ target_height, target_width = target_size
126
+ height, width = get_image_size(image, channel_dim=input_data_format)
127
+
128
+ # result = np.ones((target_height, target_width, image.shape[2]), dtype=image.dtype) * background_color
129
+ result = np.empty((target_height, target_width, image.shape[2]), dtype=image.dtype)
130
+ for i in range(image.shape[2]):
131
+ result[..., i].fill(background_color[i])
132
+
133
+ paste_x = (target_width - width) // 2
134
+ paste_y = (target_height - height) // 2
135
+
136
+ result[paste_y : paste_y + height, paste_x : paste_x + width, :] = image
137
+
138
+ return result
139
+
140
+
141
+ def expand2square(
142
+ image: np.array,
143
+ bboxes_dict=None,
144
+ background_color=(127, 127, 127),
145
+ input_data_format=None,
146
+ ) -> np.array:
147
+ """
148
+ Expands the input image to a square shape by placing it at the center of a new square canvas,
149
+ with padding added to the shorter side (either top/bottom or left/right).
150
+
151
+ The image is always centered on the new canvas, and padding is applied symmetrically.
152
+
153
+ Args:
154
+ image (np.array): Input image as a NumPy array.
155
+ bboxes_dict (dict, optional): A dictionary of bounding boxes, where each value is an NDArray of shape (N, 4, 2)
156
+ with box coordinates in the format [[xtl, ytl], [xtr, ytr], [xbr, ybr], [xbl, ybl]].
157
+ Supports multiple categories (e.g., "ocr", "html") simultaneously.
158
+ background_color (tuple, optional): RGB color to fill the padding area. Defaults to (127, 127, 127).
159
+ input_data_format (optional): Optional format specifier for image data (e.g., "channels_first" or "channels_last").
160
+
161
+ Returns:
162
+ np.array: A square-shaped image with the original image centered and padded as needed.
163
+
164
+ Example:
165
+ >>> _img = np.ones((100, 80, 3), dtype=np.uint8) * 100
166
+ >>> _bboxes_dict = {"words": np.array([[[10, 10], [20, 10], [20, 20], [10, 20]],
167
+ ... [[30, 30], [40, 30], [40, 40], [30, 40]]])}
168
+ >>> _img, _bboxes_dict = expand2square(_img, _bboxes_dict, (255, 255, 255))
169
+ >>> _img.shape
170
+ (100, 100, 3)
171
+ >>> guessed_ocr_bboxes = np.array([[[20, 10], [30, 10], [30, 20], [20, 20]],
172
+ ... [[40, 30], [50, 30], [50, 40], [40, 40]]])
173
+ >>> np.testing.assert_array_almost_equal(_bboxes_dict["words"], guessed_ocr_bboxes) is None
174
+ True
175
+ """
176
+ height, width = get_image_size(image, channel_dim=input_data_format)
177
+ if width == height:
178
+ return image, bboxes_dict
179
+ elif width > height:
180
+ # result = np.ones((width, width, image.shape[2]), dtype=image.dtype) * background_color
181
+ result = np.empty((width, width, image.shape[2]), dtype=image.dtype)
182
+ for i in range(image.shape[2]):
183
+ result[..., i].fill(background_color[i])
184
+
185
+ result[(width - height) // 2 : (width - height) // 2 + height, :] = image
186
+ if bboxes_dict is not None:
187
+ for key in bboxes_dict:
188
+ bboxes_dict[key][:, :, 1] += (width - height) // 2
189
+ return result, bboxes_dict
190
+ else:
191
+ # result = np.ones((height, height, image.shape[2]), dtype=image.dtype) * background_color
192
+ result = np.empty((height, height, image.shape[2]), dtype=image.dtype)
193
+ for i in range(image.shape[2]):
194
+ result[..., i].fill(background_color[i])
195
+
196
+ result[:, (height - width) // 2 : (height - width) // 2 + width] = image
197
+ if bboxes_dict is not None:
198
+ for key in bboxes_dict:
199
+ bboxes_dict[key][:, :, 0] += (height - width) // 2
200
+ return result, bboxes_dict
201
+
202
+
203
+ def resize_longside(
204
+ image: np.array,
205
+ size: int,
206
+ resample: PILImageResampling = PILImageResampling.BICUBIC,
207
+ data_format: Optional[Union[str, ChannelDimension]] = None,
208
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
209
+ ):
210
+ """
211
+ Resizes the image so that its longer side matches the specified size, maintaining the original aspect ratio.
212
+
213
+ Args:
214
+ image (np.array): Input image as a NumPy array.
215
+ size (int): Target size for the longer side of the image.
216
+ resample (PILImageResampling, optional): Resampling method to use during resizing. Defaults to BICUBIC.
217
+ data_format (str or ChannelDimension, optional): Output data format (e.g., "channels_first" or "channels_last").
218
+ input_data_format (str or ChannelDimension, optional): Input data format of the image.
219
+
220
+ Returns:
221
+ np.array: The resized image with its aspect ratio preserved.
222
+ """
223
+ height, width = get_image_size(image, channel_dim=input_data_format)
224
+
225
+ if width == height:
226
+ target_height, target_width = size, size
227
+ elif width > height:
228
+ target_width = size
229
+ target_height = math.ceil(height / width * size)
230
+ else:
231
+ target_width = math.ceil(width / height * size)
232
+ target_height = size
233
+
234
+ return resize(
235
+ image,
236
+ size=(target_height, target_width),
237
+ resample=resample,
238
+ data_format=data_format,
239
+ input_data_format=input_data_format,
240
+ )
241
+
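+ # Usage sketch (illustrative comment only, not executed): the longer side is scaled to `size` and the
+ # shorter side follows the original aspect ratio:
+ #
+ #   >>> img = np.zeros((200, 400, 3), dtype=np.uint8)
+ #   >>> resized = resize_longside(img, size=336, input_data_format=ChannelDimension.LAST)
+ #   >>> get_image_size(resized, channel_dim=ChannelDimension.LAST)
+ #   (168, 336)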
242
+
243
+ def select_best_resolution(original_size: tuple, possible_resolutions: list) -> tuple:
244
+ """
245
+ Selects the best-fit resolution from a list of possible resolutions based on the original image size.
246
+
247
+ This function, adapted from LLaVA-Next
248
+ (https://github.com/huggingface/transformers/blob/v4.40.2/src/transformers/models/llava_next/image_processing_llava_next.py),
249
+ evaluates each resolution by computing its effective and wasted area compared to the original size.
250
+ The optimal resolution is the one that maximizes the effective area while minimizing unused (wasted) space.
251
+
252
+ Args:
253
+ original_size (tuple): The original image size in the format (height, width).
254
+ possible_resolutions (list): A list of candidate resolutions in the format [(height1, width1), (height2, width2), ...].
255
+
256
+ Returns:
257
+ tuple: The best-fit resolution in the format (height, width).
258
+ """
259
+ original_height, original_width = original_size
260
+ best_fit = None
261
+ max_effective_resolution = 0
262
+ min_wasted_resolution = float("inf")
263
+
264
+ for height, width in possible_resolutions:
265
+ scale = min(width / original_width, height / original_height)
266
+ downscaled_width, downscaled_height = int(original_width * scale), int(original_height * scale)
267
+ effective_resolution = min(downscaled_width * downscaled_height, original_width * original_height)
268
+ wasted_resolution = (width * height) - effective_resolution
269
+
270
+ if effective_resolution > max_effective_resolution or (
271
+ effective_resolution == max_effective_resolution and wasted_resolution < min_wasted_resolution
272
+ ):
273
+ max_effective_resolution = effective_resolution
274
+ min_wasted_resolution = wasted_resolution
275
+ best_fit = (height, width)
276
+
277
+ return best_fit
278
+
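+ # Worked example (illustrative comment only, not executed): for a wide 336x1000 image, the wide
+ # 336x672 candidate keeps the largest effective area with the least wasted padding:
+ #
+ #   >>> select_best_resolution((336, 1000), [[336, 336], [336, 672], [672, 336], [672, 672]])
+ #   (336, 672)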
279
+
280
+ def _get_local_grids_output_size(image: np.array, target_resolution: tuple, input_data_format=None):
281
+ """
282
+ Computes the output size (height, width) of an image resized to fit within the target resolution
283
+ while preserving its aspect ratio, used when laying out the local grids.
284
+
285
+ Args:
286
+ image (np.array): Input image as a NumPy array.
287
+ target_resolution (tuple): Target resolution in the format (target_height, target_width).
288
+ input_data_format (optional): Optional format specifier (e.g., "channels_first" or "channels_last").
289
+
290
+ Returns:
291
+ tuple: A tuple (new_height, new_width) giving the resized dimensions in pixels that fit within the target resolution while preserving the aspect ratio.
292
+ """
293
+ original_height, original_width = get_image_size(image, channel_dim=input_data_format)
294
+ target_height, target_width = target_resolution
295
+
296
+ scale_w = target_width / original_width
297
+ scale_h = target_height / original_height
298
+
299
+ if scale_w < scale_h:
300
+ new_width = target_width
301
+ new_height = min(math.ceil(original_height * scale_w), target_height)
302
+ else:
303
+ new_height = target_height
304
+ new_width = min(math.ceil(original_width * scale_h), target_width)
305
+
306
+ return new_height, new_width
307
+
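+ # Worked example (illustrative comment only, not executed): a 300x500 image resized toward a
+ # (336, 672) target is limited by the height scale (336/300), so the output is 336x560 and the
+ # remaining 112 pixels of width are later filled by `_pad_for_patching`:
+ #
+ #   >>> img = np.zeros((300, 500, 3), dtype=np.uint8)
+ #   >>> _get_local_grids_output_size(img, (336, 672), input_data_format=ChannelDimension.LAST)
+ #   (336, 560)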
308
+
309
+ def determine_anyres_num_vision_patches(
310
+ num_grids,
311
+ image_size,
312
+ grid_size,
313
+ patch_size,
314
+ possible_resolutions,
315
+ anyres=False,
316
+ unpad=True,
317
+ num_queries_vis_abstractor=0,
318
+ num_queries_vis_abstractor_slow=0,
319
+ is_video=False,
320
+ first_last_frames_slow=False, # sample-wise option
321
+ is_first_or_last_frames=False, # grid-wise option
322
+ ):
323
+ """
324
+ Computes the number of visual tokens (patches) based on image resolution, grid configuration, and patch size.
325
+
326
+ This function supports both fixed-size and any-resolution settings, as well as video-specific configurations
327
+ such as handling slow frames and frame position flags.
328
+
329
+ Args:
330
+ num_grids (int): Number of grids per image (e.g., 1 for 1x1, 4 for 2x2, etc.).
331
+ image_size (tuple): The original image size as (height, width).
332
+ grid_size (int): Size of each grid in pixels (e.g., 336).
333
+ patch_size (int): Size of each vision patch (e.g., 14 for ViT models).
334
+ possible_resolutions (list): List of possible resolution tuples [(h1, w1), (h2, w2), ...].
335
+ anyres (bool, optional): Whether to use any-resolution mode. Defaults to False.
336
+ unpad (bool, optional): Whether to unpad the image before computing patches. Defaults to True.
337
+ num_queries_vis_abstractor (int, optional): Number of query tokens for vision abstractor (fast path).
338
+ num_queries_vis_abstractor_slow (int, optional): Number of query tokens for vision abstractor (slow path).
339
+ is_video (bool, optional): Whether the input is a video. Defaults to False.
340
+ first_last_frames_slow (bool, optional): Whether to treat first/last video frames as "slow". Defaults to False.
341
+ is_first_or_last_frames (bool, optional): Whether current grid corresponds to first/last frame. Defaults to False.
342
+
343
+ Returns:
344
+ int: Total number of visual tokens (patches) after processing.
345
+ """
346
+ if not anyres:
347
+ return num_queries_vis_abstractor if num_queries_vis_abstractor > 0 else (grid_size // patch_size) ** 2
348
+
349
+ if num_queries_vis_abstractor > 0:
350
+ num_patch_per_grid = int(num_queries_vis_abstractor**0.5)
351
+ else:
352
+ num_patch_per_grid = grid_size // patch_size
353
+
354
+ num_global_per_grid = num_patch_per_grid
355
+
356
+ # In anyres mode, a global image is included, so there are always at least 2 grids.
357
+ # However, for video inputs, there is no global image, so it's possible to have only 1 grid.
358
+ # Therefore, the assertion below is commented out:
359
+ # assert num_grids > 1
360
+
361
+ # Compute the number of vision patches.
362
+ height, width = select_best_resolution(image_size, possible_resolutions)
363
+
364
+ num_patch_height = (height // grid_size) * num_patch_per_grid
365
+ num_patch_width = (width // grid_size) * num_patch_per_grid
366
+
367
+ # local images
368
+ if unpad:
369
+ original_height, original_width = image_size
370
+
371
+ original_aspect_ratio = original_width / original_height
372
+ current_aspect_ratio = num_patch_width / num_patch_height
373
+
374
+ if original_aspect_ratio > current_aspect_ratio:
375
+ scale_factor = num_patch_width / original_width
376
+ new_height = int(original_height * scale_factor)
377
+ padding = (num_patch_height - new_height) // 2
378
+ num_patch_height = num_patch_height - padding * 2
379
+ else:
380
+ scale_factor = num_patch_height / original_height
381
+ new_width = int(original_width * scale_factor)
382
+ padding = (num_patch_width - new_width) // 2
383
+ num_patch_width = num_patch_width - padding * 2
384
+
385
+ num_patches = num_patch_width * num_patch_height + num_patch_height
386
+ else:
387
+ num_patches = num_patch_width * num_patch_height
388
+
389
+ # In the "slow" strategy, when applying to first and last frames only, it is applied exclusively to those two frames.
390
+ if num_queries_vis_abstractor_slow > 0:
391
+ if first_last_frames_slow:
392
+ if is_first_or_last_frames:
393
+ num_patches += num_queries_vis_abstractor_slow - num_queries_vis_abstractor
394
+ else:
395
+ num_patches += num_queries_vis_abstractor_slow - num_queries_vis_abstractor
396
+ # The slowfast feature is only applicable when unpad is set to False.
397
+ assert unpad is False
398
+
399
+ # Global image is not included for video inputs.
400
+ if not is_video:
401
+ num_patches += num_global_per_grid**2
402
+
403
+ return num_patches
404
+
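+ # Worked example (illustrative comment only, not executed; the numbers are assumed settings, not the
+ # shipped config): a 336x1000 image with anyres=True, unpad=False, grid_size=336 and 81 abstractor
+ # queries per grid selects the (336, 672) resolution, i.e. a 1x2 grid. The local grids contribute
+ # (1*9) * (2*9) = 162 tokens and the global image adds 9**2 = 81, for 243 visual tokens in total:
+ #
+ #   >>> determine_anyres_num_vision_patches(
+ #   ...     num_grids=3, image_size=(336, 1000), grid_size=336, patch_size=14,
+ #   ...     possible_resolutions=[[336, 336], [336, 672], [672, 336], [672, 672]],
+ #   ...     anyres=True, unpad=False, num_queries_vis_abstractor=81)
+ #   243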
405
+
406
+ class HCXVisionProcessor(BaseImageProcessor):
407
+ r"""
408
+ Constructs a VLM image processor.
409
+
410
+ This processor is based on [`CLIPImageProcessor`] and incorporates additional techniques
411
+ for handling high-resolution images, such as flexible resolution support (`anyres`), unpadding,
412
+ square padding, and multi-grid patching strategies.
413
+
414
+ Args:
415
+ do_resize (bool): Whether to resize the image.
416
+ size (Dict[str, int], optional): Target size for resizing, typically with keys `"height"` and `"width"`.
417
+ anyres (bool): Whether to enable the any-resolution (`anyres`) feature, which allows flexible resolution handling via grid division.
418
+ unpad (bool): When `anyres` is enabled, whether to remove visual tokens corresponding to pure padding regions.
419
+ max_num_grids (int): Maximum number of grids allowed per image.
420
+ max_image_cnt (int): Maximum number of images that can be processed at once (used for batching).
421
+ num_queries_vis_abstractor (int): Number of visual query tokens per grid when using a visual resampler (e.g., Perceiver).
422
+ num_queries_vis_abstractor_video_fast (int): Number of visual queries for fast-path video frames.
423
+ num_queries_vis_abstractor_video_slow (int): Number of visual queries for slow-path video frames (e.g., first/last).
424
+ possible_resolutions (List): List of allowed resolution pairs when `anyres` is enabled. Example: [[336, 336], [336, 672], [672, 336]].
425
+ patch_size (int): Patch size for the Vision Transformer (ViT).
426
+ pad_to_square (bool): Whether to pad images to a square shape. If `False`, a center crop is applied to fit ViT input.
427
+ resample (PILImageResampling): Resampling method to use for resizing. Default is `BICUBIC`.
428
+ do_center_crop (bool): Whether to apply center cropping.
429
+ crop_size (Dict[str, int], optional): Size for center cropping.
430
+ do_rescale (bool): Whether to rescale pixel values.
431
+ rescale_factor (float or int): Factor to use for rescaling pixel values (typically `1/255`).
432
+ do_normalize (bool): Whether to normalize pixel values using `image_mean` and `image_std`.
433
+ image_mean (float or List[float], optional): Mean values for normalization. Can be a single float or list of floats per channel.
434
+ image_std (float or List[float], optional): Standard deviation values for normalization. Can be a single float or list of floats per channel.
435
+ do_convert_rgb (bool): Whether to convert the input image to RGB.
436
+ first_last_frames_slow (bool): Whether to treat the first and last frames of a video as “slow path” (processed differently).
437
+
438
+ Attributes:
439
+ model_input_names (List[str]): Names of the expected model inputs. Defaults to `["pixel_values"]`.
440
+ """
441
+
442
+ model_input_names = ["pixel_values"]
443
+
444
+ def __init__(
445
+ self,
446
+ do_resize: bool = True,
447
+ size: Dict[str, int] = None,
448
+ anyres: bool = False,
449
+ unpad: bool = False,
450
+ max_num_grids: int = 9,
451
+ max_image_cnt: int = 12,
452
+ num_queries_vis_abstractor: int = 0,
453
+ num_queries_vis_abstractor_video_fast: int = 0,
454
+ num_queries_vis_abstractor_video_slow: int = 0,
455
+ possible_resolutions: List = [],
456
+ patch_size: int = 14,
457
+ pad_to_square: bool = True,
458
+ resample: PILImageResampling = PILImageResampling.BICUBIC,
459
+ do_center_crop: bool = True,
460
+ crop_size: Dict[str, int] = None,
461
+ do_rescale: bool = True,
462
+ rescale_factor: Union[int, float] = 1 / 255,
463
+ do_normalize: bool = True,
464
+ image_mean: Optional[Union[float, List[float]]] = None,
465
+ image_std: Optional[Union[float, List[float]]] = None,
466
+ do_convert_rgb: bool = True,
467
+ first_last_frames_slow: bool = False,
468
+ **kwargs,
469
+ ) -> None:
470
+ super().__init__(**kwargs)
471
+ size = size if size is not None else {"shortest_edge": 512}
472
+ size = get_size_dict(size, default_to_square=False)
473
+ crop_size = crop_size if crop_size is not None else {"height": 512, "width": 512}
474
+ crop_size = get_size_dict(crop_size, default_to_square=True, param_name="crop_size")
475
+
476
+ self.do_resize = do_resize
477
+ self.size = size
478
+ self.anyres = anyres
479
+ self.unpad = unpad
480
+ self.max_num_grids = max_num_grids
481
+ self.max_image_cnt = max_image_cnt
482
+ self.num_queries_vis_abstractor = num_queries_vis_abstractor
483
+ self.num_queries_vis_abstractor_video_fast = num_queries_vis_abstractor_video_fast
484
+ self.num_queries_vis_abstractor_video_slow = num_queries_vis_abstractor_video_slow
485
+ self.possible_resolutions = [_resolution for _resolution in possible_resolutions]
486
+ self.patch_size = patch_size
487
+ self.pad_to_square = pad_to_square
488
+ self.resample = resample
489
+ self.do_center_crop = do_center_crop
490
+ self.crop_size = crop_size
491
+ self.do_rescale = do_rescale
492
+ self.rescale_factor = rescale_factor
493
+ self.do_normalize = do_normalize
494
+ self.image_mean = image_mean if image_mean is not None else OPENAI_CLIP_MEAN
495
+ self.image_std = image_std if image_std is not None else OPENAI_CLIP_STD
496
+ self.do_convert_rgb = do_convert_rgb
497
+ self.first_last_frames_slow = first_last_frames_slow
498
+
499
+ assert self.crop_size["height"] == self.crop_size["width"]
500
+
501
+ def resize(
502
+ self,
503
+ image: np.ndarray,
504
+ size: Dict[str, int],
505
+ resample: PILImageResampling = PILImageResampling.BICUBIC,
506
+ data_format: Optional[Union[str, ChannelDimension]] = None,
507
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
508
+ **kwargs,
509
+ ) -> np.ndarray:
510
+ """
511
+ Resizes the input image to the specified target size.
512
+
513
+ Args:
514
+ image (np.ndarray): The input image to resize.
515
+ size (Dict[str, int]): A dictionary specifying the target size with keys `"height"` and `"width"`.
516
+ resample (PILImageResampling, optional): The resampling filter to use. Defaults to `BICUBIC`.
517
+ data_format (str or ChannelDimension, optional): The desired output data format (e.g., "channels_last").
518
+ input_data_format (str or ChannelDimension, optional): The input data format of the image.
519
+ **kwargs: Additional keyword arguments, if any.
520
+
521
+ Returns:
522
+ np.ndarray: The resized image as a NumPy array.
523
+ """
524
+ default_to_square = True
525
+ if "shortest_edge" in size:
526
+ size = size["shortest_edge"]
527
+ default_to_square = False
528
+ elif "height" in size and "width" in size:
529
+ size = (size["height"], size["width"])
530
+ else:
531
+ raise ValueError("Size must contain either 'shortest_edge' or 'height' and 'width'.")
532
+
533
+ output_size = get_resize_output_image_size(
534
+ image,
535
+ size=size,
536
+ default_to_square=default_to_square,
537
+ input_data_format=input_data_format,
538
+ )
539
+
540
+ return resize(
541
+ image,
542
+ size=output_size,
543
+ resample=resample,
544
+ data_format=data_format,
545
+ input_data_format=input_data_format,
546
+ **kwargs,
547
+ )
548
+
549
+ def _preprocess(
550
+ self,
551
+ images: ImageInput,
552
+ do_resize: bool = None,
553
+ size: Dict[str, int] = None,
554
+ resample: PILImageResampling = None,
555
+ do_center_crop: bool = None,
556
+ crop_size: int = None,
557
+ do_rescale: bool = None,
558
+ rescale_factor: float = None,
559
+ do_normalize: bool = None,
560
+ image_mean: Optional[Union[float, List[float]]] = None,
561
+ image_std: Optional[Union[float, List[float]]] = None,
562
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
563
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
564
+ ) -> List[np.ndarray]:
565
+ """
566
+ Applies a sequence of preprocessing operations to the input image(s), including resizing, cropping, rescaling,
567
+ normalization, and format conversion.
568
+
569
+ This method is typically used internally to prepare images for model input.
570
+
571
+ Args:
572
+ images (ImageInput): A single image or a batch of images to preprocess.
573
+ do_resize (bool, optional): Whether to resize the image(s).
574
+ size (Dict[str, int], optional): Target size for resizing, with keys `"height"` and `"width"`.
575
+ resample (PILImageResampling, optional): Resampling method to use for resizing.
576
+ do_center_crop (bool, optional): Whether to apply center cropping.
577
+ crop_size (int, optional): Size of the center crop (applied to both height and width).
578
+ do_rescale (bool, optional): Whether to rescale the image pixel values.
579
+ rescale_factor (float, optional): Factor to use when rescaling pixel values (e.g., 1/255).
580
+ do_normalize (bool, optional): Whether to normalize the image using `image_mean` and `image_std`.
581
+ image_mean (float or List[float], optional): Mean value(s) used for normalization.
582
+ image_std (float or List[float], optional): Standard deviation value(s) used for normalization.
583
+ data_format (ChannelDimension, optional): The desired output data format (e.g., `ChannelDimension.FIRST`).
584
+ input_data_format (str or ChannelDimension, optional): The format of the input image(s).
585
+
586
+ Returns:
587
+ List[np.ndarray]: The list of preprocessed images, ready for model input.
588
+ """
589
+ images = make_list_of_images(images)
590
+
591
+ if do_resize:
592
+ images = [
593
+ self.resize(
594
+ image=image,
595
+ size=size,
596
+ resample=resample,
597
+ input_data_format=input_data_format,
598
+ )
599
+ for image in images
600
+ ]
601
+
602
+ if do_center_crop:
603
+ images = [
604
+ self.center_crop(image=image, size=crop_size, input_data_format=input_data_format) for image in images
605
+ ]
606
+
607
+ if do_rescale:
608
+ images = [
609
+ self.rescale(
610
+ image=image,
611
+ scale=rescale_factor,
612
+ input_data_format=input_data_format,
613
+ )
614
+ for image in images
615
+ ]
616
+
617
+ if do_normalize:
618
+ images = [
619
+ self.normalize(
620
+ image=image,
621
+ mean=image_mean,
622
+ std=image_std,
623
+ input_data_format=input_data_format,
624
+ )
625
+ for image in images
626
+ ]
627
+
628
+ images = [
629
+ to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images
630
+ ]
631
+
632
+ return images
633
+
634
+ def _resize_for_local_grids(
635
+ self,
636
+ image: np.array,
637
+ target_resolution: tuple,
638
+ resample,
639
+ input_data_format: ChannelDimension,
640
+ ) -> np.array:
641
+ """
642
+ Resizes the image to the given target resolution for use in local grid processing.
643
+
644
+ This function ensures that the image is properly resized to match the (height, width) specified
645
+ in `target_resolution`, using the provided resampling method. It supports channel-first and
646
+ channel-last formats based on `input_data_format`.
647
+
648
+ Args:
649
+ image (np.array): Input image as a NumPy array.
650
+ target_resolution (tuple): Target resolution as (height, width) for resizing.
651
+ resample: Resampling method to use (e.g., `PILImageResampling.BICUBIC`).
652
+ input_data_format (ChannelDimension): Format of the input image (e.g., `ChannelDimension.FIRST` or `LAST`).
653
+
654
+ Returns:
655
+ np.array: The resized image in NumPy array format.
656
+ """
657
+ new_height, new_width = _get_local_grids_output_size(image, target_resolution, input_data_format)
658
+
659
+ # Resize the image
660
+ resized_image = resize(
661
+ image,
662
+ (new_height, new_width),
663
+ resample=resample,
664
+ input_data_format=input_data_format,
665
+ )
666
+
667
+ return resized_image
668
+
669
+ def _pad_for_patching(
670
+ self,
671
+ image: np.array,
672
+ target_resolution: tuple,
673
+ input_data_format: ChannelDimension,
674
+ ) -> np.array:
675
+ """
676
+ Pads the image to match the target resolution, ensuring compatibility with patch-based models.
677
+
678
+ This is typically used to make sure the image dimensions are divisible by the patch size or to
679
+ meet specific model input requirements. Padding is applied symmetrically where needed.
680
+
681
+ Args:
682
+ image (np.array): Input image as a NumPy array.
683
+ target_resolution (tuple): The desired resolution after padding, in the format (height, width).
684
+ input_data_format (ChannelDimension): Format of the input image (e.g., `ChannelDimension.FIRST` or `LAST`).
685
+
686
+ Returns:
687
+ np.array: The padded image as a NumPy array.
688
+ """
689
+ target_height, target_width = target_resolution
690
+
691
+ background_color = tuple(int(x * 255) for x in self.image_mean)
692
+ padded_image = pad(
693
+ image,
694
+ target_size=(target_height, target_width),
695
+ background_color=background_color,
696
+ input_data_format=input_data_format,
697
+ )
698
+
699
+ return padded_image
700
+
701
+ def get_image_grids(
702
+ self,
703
+ image: np.array,
704
+ possible_resolutions,
705
+ grid_size: int,
706
+ resample: PILImageResampling,
707
+ data_format: ChannelDimension,
708
+ input_data_format: ChannelDimension,
709
+ ) -> List[np.array]:
710
+ """
711
+ Splits the input image into multiple local grids based on possible resolutions and grid size.
712
+
713
+ The function selects the best resolution from the provided list, resizes the image accordingly,
714
+ and divides it into non-overlapping grid patches of size (grid_size x grid_size). It is commonly
715
+ used for any-resolution (anyres) visual processing.
716
+
717
+ Args:
718
+ image (np.array): Input image as a NumPy array.
719
+ possible_resolutions (List[Tuple[int, int]]): List of allowed resolutions to choose from.
720
+ grid_size (int): The size of each grid patch (e.g., 336 pixels).
721
+ resample (PILImageResampling): Resampling method used during resizing.
722
+ data_format (ChannelDimension): Output data format (e.g., `ChannelDimension.FIRST`).
723
+ input_data_format (ChannelDimension): Input data format of the image.
724
+
725
+ Returns:
726
+ List[np.array]: A list of grid image patches as NumPy arrays.
727
+ """
728
+ if not isinstance(possible_resolutions, list):
729
+ raise ValueError("possible_resolutions must be a list of possible resolutions.")
730
+
731
+ image_size = get_image_size(image, channel_dim=input_data_format)
732
+ best_resolution = select_best_resolution(image_size, possible_resolutions)
733
+ resized_image = self._resize_for_local_grids(
734
+ image,
735
+ best_resolution,
736
+ resample=resample,
737
+ input_data_format=input_data_format,
738
+ )
739
+ padded_image = self._pad_for_patching(resized_image, best_resolution, input_data_format=input_data_format)
740
+ local_grids = divide_to_grids(padded_image, grid_size=grid_size, input_data_format=input_data_format)
741
+
742
+ # make sure that all patches are in the input data format
743
+ local_grids = [
744
+ to_channel_dimension_format(grid, channel_dim=data_format, input_channel_dim=input_data_format)
745
+ for grid in local_grids
746
+ ]
747
+
748
+ return local_grids
749
+
750
+ def preprocess(
751
+ self,
752
+ images: ImageInput,
753
+ do_resize: bool = None,
754
+ size: Dict[str, int] = None,
755
+ anyres: bool = None,
756
+ unpad: bool = None,
757
+ is_video_list: List[bool] = None,
758
+ possible_resolutions: List = None,
759
+ patch_size: int = None,
760
+ pad_to_square: bool = None,
761
+ resample: PILImageResampling = None,
762
+ do_center_crop: bool = None,
763
+ crop_size: int = None,
764
+ do_rescale: bool = None,
765
+ rescale_factor: float = None,
766
+ do_normalize: bool = None,
767
+ image_mean: Optional[Union[float, List[float]]] = None,
768
+ image_std: Optional[Union[float, List[float]]] = None,
769
+ do_convert_rgb: bool = None,
770
+ return_tensors: Optional[Union[str, TensorType]] = None,
771
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
772
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
773
+ is_first_or_last_frames: List[bool] = False,
774
+ ):
775
+ """
776
+ Preprocesses images using HCXVisionProcessor.
777
+
778
+ This method prepares images for visual language models by applying resizing, padding, cropping,
779
+ normalization, and tokenization into visual patches. In video mode, each frame is converted to
780
+ a 1D sequence of patches. The `unpad` option is disabled when processing videos.
781
+
782
+ Args:
783
+ images (ImageInput): A single image or a batch of images (PIL, NumPy, or tensor format).
784
+ do_resize (bool, optional): Whether to resize the image(s).
785
+ size (Dict[str, int], optional): Resize target with keys `"height"` and `"width"`.
786
+ anyres (bool, optional): Whether to use any-resolution processing with grid splitting.
787
+ unpad (bool, optional): Whether to remove visual tokens that belong to padding areas (only in non-video mode).
788
+ is_video_list (List[bool], optional): A list indicating which inputs are video frames.
789
+ possible_resolutions (List, optional): List of resolution pairs allowed in `anyres` mode.
790
+ patch_size (int, optional): Patch size for the Vision Transformer (ViT).
791
+ pad_to_square (bool, optional): Whether to pad the image to a square.
792
+ resample (PILImageResampling, optional): Resampling method to use for resizing.
793
+ do_center_crop (bool, optional): Whether to apply center cropping.
794
+ crop_size (int, optional): Target crop size for center cropping.
795
+ do_rescale (bool, optional): Whether to rescale image pixel values.
796
+ rescale_factor (float, optional): Factor for pixel rescaling, e.g., `1/255`.
797
+ do_normalize (bool, optional): Whether to normalize using mean and std.
798
+ image_mean (float or List[float], optional): Mean value(s) for normalization.
799
+ image_std (float or List[float], optional): Standard deviation(s) for normalization.
800
+ do_convert_rgb (bool, optional): Whether to convert the image to RGB.
801
+ return_tensors (str or TensorType, optional): Desired output tensor type (e.g., "pt" for PyTorch).
802
+ data_format (ChannelDimension, optional): Output data format (e.g., `ChannelDimension.FIRST`).
803
+ input_data_format (str or ChannelDimension, optional): Format of the input image.
804
+ is_first_or_last_frames (List[bool], optional): Flags indicating whether each image is a first/last video frame.
805
+
806
+ Returns:
807
+ BatchFeature: A `BatchFeature` containing `is_videos` and per-image query metadata along with:
808
+ pixel_values (List[torch.Tensor]): A list of 4D image tensors ready for model input.
809
+ image_sizes (List[List[int]]): A list of lists containing the original width and height
810
+ of each image, e.g., `[[width, height], ...]`.
811
+ vision_query_lengths (List[int]): A list of integers representing the number of visual tokens
812
+ each image contributes to the LLM input.
813
+ """
814
+ do_resize = do_resize if do_resize is not None else self.do_resize
815
+ size = size if size is not None else self.size
816
+ size = get_size_dict(size, param_name="size", default_to_square=False)
817
+ anyres = anyres if anyres is not None else self.anyres
818
+ unpad = unpad if unpad is not None else self.unpad
819
+ possible_resolutions = possible_resolutions if possible_resolutions is not None else self.possible_resolutions
820
+ patch_size = patch_size if patch_size is not None else self.patch_size
821
+ pad_to_square = pad_to_square if pad_to_square is not None else self.pad_to_square
822
+ resample = resample if resample is not None else self.resample
823
+ do_center_crop = do_center_crop if do_center_crop is not None else self.do_center_crop
824
+ crop_size = crop_size if crop_size is not None else self.crop_size
825
+ crop_size = get_size_dict(crop_size, param_name="crop_size", default_to_square=True)
826
+ do_rescale = do_rescale if do_rescale is not None else self.do_rescale
827
+ rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
828
+ do_normalize = do_normalize if do_normalize is not None else self.do_normalize
829
+ image_mean = image_mean if image_mean is not None else self.image_mean
830
+ image_std = image_std if image_std is not None else self.image_std
831
+ do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb
832
+
833
+ images = make_list_of_images(images)
834
+
835
+ if not valid_images(images):
836
+ raise ValueError(
837
+ "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
838
+ "torch.Tensor, tf.Tensor or jax.ndarray."
839
+ )
840
+
841
+ if do_convert_rgb:
842
+ images = [convert_to_rgb(image) for image in images]
843
+
844
+ # All transformations expect numpy arrays.
845
+ images = [to_numpy_array(image) for image in images]
846
+
847
+ if is_scaled_image(images[0]) and do_rescale:
848
+ logger.warning_once(
849
+ "It looks like you are trying to rescale already rescaled images. If the input"
850
+ " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again."
851
+ )
852
+
853
+ if input_data_format is None:
854
+ # We assume that all images have the same channel dimension format.
855
+ input_data_format = infer_channel_dimension_format(images[0])
856
+
857
+ new_images = []
858
+ image_sizes = [get_image_size(image, channel_dim=input_data_format) for image in images]
859
+ vision_query_lengths = []
860
+
861
+ assert crop_size["height"] == crop_size["width"]
862
+
863
+ # Padding operations for the global image can become a bottleneck when the original image width or height is large.
864
+ # To mitigate this, the image is first resized such that the longest side is scaled proportionally based on size["shortest_edge"],
865
+ # and then padding is applied to reach the target dimensions.
866
+ if anyres:
867
+ anyres_global_images = copy.deepcopy(images)
868
+ if pad_to_square:
869
+ background_color = tuple(int(x * 255) for x in self.image_mean)
870
+ anyres_global_images = [
871
+ resize_longside(
872
+ copy.deepcopy(image),
873
+ size["shortest_edge"],
874
+ resample,
875
+ input_data_format,
876
+ )
877
+ for image in anyres_global_images
878
+ ]
879
+ anyres_global_images = [
880
+ expand2square(
881
+ image,
882
+ background_color=background_color,
883
+ input_data_format=input_data_format,
884
+ )[0]
885
+ for image in anyres_global_images
886
+ ]
887
+ else:
888
+ anyres_global_images = [
889
+ self.resize(
890
+ image=image,
891
+ size={
892
+ "height": size["shortest_edge"],
893
+ "width": size["shortest_edge"],
894
+ },
895
+ resample=resample,
896
+ input_data_format=input_data_format,
897
+ )
898
+ for image in anyres_global_images
899
+ ]
900
+ else:
901
+ anyres_global_images = [None for _ in range(len(images))]
902
+ if pad_to_square:
903
+ background_color = tuple(int(x * 255) for x in self.image_mean)
904
+ images = [
905
+ resize_longside(image, size["shortest_edge"], resample, input_data_format) for image in images
906
+ ]
907
+ images = [
908
+ expand2square(
909
+ image,
910
+ background_color=background_color,
911
+ input_data_format=input_data_format,
912
+ )[0]
913
+ for image in images
914
+ ]
915
+
916
+ num_queries_vis_abstractors = []
917
+ num_queries_vis_abstractors_slow = []
918
+ first_last_frames_slows = []
919
+
920
+ for image, is_video, anyres_global_image, image_size in zip(
921
+ images, is_video_list, anyres_global_images, image_sizes
922
+ ):
923
+ if is_video:
924
+ num_queries_vis_abstractor = self.num_queries_vis_abstractor_video_fast
925
+ num_queries_vis_abstractor_slow = self.num_queries_vis_abstractor_video_slow
926
+ else:
927
+ num_queries_vis_abstractor = self.num_queries_vis_abstractor
928
+ num_queries_vis_abstractor_slow = 0
929
+
930
+ num_queries_vis_abstractors.append(num_queries_vis_abstractor)
931
+ num_queries_vis_abstractors_slow.append(num_queries_vis_abstractor_slow)
932
+ first_last_frames_slows.append(self.first_last_frames_slow)
933
+
934
+ if anyres:
935
+ # convert image into a list of grids
936
+ # we intentionally use the same data format as the input data format
937
+ image_grids = self.get_image_grids(
938
+ image,
939
+ possible_resolutions,
940
+ grid_size=crop_size["height"],
941
+ resample=resample,
942
+ data_format=input_data_format,
943
+ input_data_format=input_data_format,
944
+ )
945
+ # Global image (thumbnail) is not used for video inputs.
946
+ if not is_video:
947
+ image_grids = [anyres_global_image] + image_grids
948
+ else:
949
+ image_grids = [image]
950
+
951
+ pixel_values = self._preprocess(
952
+ image_grids,
953
+ do_resize=do_resize,
954
+ size=size,
955
+ resample=resample,
956
+ do_center_crop=do_center_crop,
957
+ crop_size=crop_size,
958
+ do_rescale=do_rescale,
959
+ rescale_factor=rescale_factor,
960
+ do_normalize=do_normalize,
961
+ image_mean=image_mean,
962
+ image_std=image_std,
963
+ data_format=data_format,
964
+ input_data_format=input_data_format,
965
+ )
966
+
967
+ pixel_values = np.array(pixel_values)
968
+ new_images.append(pixel_values)
969
+
970
+ num_grids = pixel_values.shape[0]
971
+
972
+ vision_query_length = determine_anyres_num_vision_patches(
973
+ num_grids=num_grids,
974
+ image_size=image_size,
975
+ grid_size=crop_size["height"],
976
+ patch_size=patch_size,
977
+ possible_resolutions=possible_resolutions,
978
+ anyres=anyres,
979
+ unpad=False if is_video else unpad,
980
+ num_queries_vis_abstractor=num_queries_vis_abstractor,
981
+ num_queries_vis_abstractor_slow=num_queries_vis_abstractor_slow,
982
+ is_video=is_video,
983
+ first_last_frames_slow=self.first_last_frames_slow,
984
+ is_first_or_last_frames=self.first_last_frames_slow,
985
+ )
986
+
987
+ vision_query_lengths.append(vision_query_length)
988
+
989
+ data = {
990
+ "pixel_values": [[torch.tensor(new_image) for new_image in new_images]],
991
+ "image_sizes": [[[image_size[1], image_size[0]] for image_size in image_sizes]],
992
+ "vision_query_lengths": [vision_query_lengths],
993
+ "is_videos": [is_video_list],
994
+ "num_queries_vis_abstractors": [num_queries_vis_abstractors],
995
+ "num_queries_vis_abstractors_slow": [num_queries_vis_abstractors_slow],
996
+ "first_last_frames_slows": [first_last_frames_slows],
997
+ }
998
+
999
+ return BatchFeature(data=data)
1000
+
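+ # Usage sketch (illustrative comment only, not executed; the resolutions below are assumptions, not
+ # the shipped configuration): the processor takes PIL images plus a per-image video flag and returns
+ # a BatchFeature with pixel values and visual-token bookkeeping:
+ #
+ #   >>> processor = HCXVisionProcessor(anyres=True, possible_resolutions=[[512, 512], [512, 1024], [1024, 512]])
+ #   >>> img = Image.new("RGB", (800, 600))
+ #   >>> batch = processor.preprocess(images=[img], is_video_list=[False])
+ #   >>> list(batch.keys())  # pixel_values, image_sizes, vision_query_lengths, is_videos, ...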
1001
+ def load_images_videos(self, vlm_chat):
1002
+ """
1003
+ Loads and prepares images or video frames from a VLM chat input.
1004
+
1005
+ This function parses the input `vlm_chat` object, extracts image or video sources,
1006
+ and loads them into memory as PIL or NumPy images, ready for preprocessing.
1007
+
1008
+ Args:
1009
+ vlm_chat: A VLM chat input structure containing multimodal elements
1010
+ (e.g., images, videos, URLs, or file paths). The format is typically a list of messages
1011
+ with associated media fields.
1012
+
1013
+ Returns:
1014
+ Tuple[List, List, List[bool]]: The chat with each video entry expanded into per-frame entries,
1015
+ the list of loaded images (video frames are combined into grid images), and a per-image flag marking which entries came from videos.
1016
+ """
1017
+ vlm_chat = copy.deepcopy(vlm_chat)
1018
+
1019
+ new_vlm_chat = []
1020
+ all_images = [] # images + images_from_videos
1021
+ is_video_list = []
1022
+
1023
+ for line in vlm_chat:
1024
+ if "content" in line:
1025
+ content = line["content"]
1026
+
1027
+ if "image" in content:
1028
+ if "filename" not in content:
1029
+ content["filename"] = f"{uuid.uuid4().hex}.jpg"
1030
+ image_pil = load_image(content["image"])
1031
+ all_images.append(image_pil)
1032
+ is_video_list.append(False)
1033
+ new_vlm_chat.append(line)
1034
+
1035
+ elif "video" in content:
1036
+ video_bytesio = load_video_to_bytesio(content["video"])
1037
+ pil_img_frames, video_time_stamp = process_video(
1038
+ video_bytesio, self.max_num_grids, self.max_image_cnt, self.crop_size["width"]
1039
+ )
1040
+ all_images.extend(pil_img_frames)
1041
+ is_video_list.extend([True] * len(pil_img_frames))
1042
+
1043
+ if "filename" not in content:
1044
+ content["filename"] = f"{uuid.uuid4().hex}.mp4"
1045
+
1046
+ for i, image_time_stamp in enumerate(video_time_stamp):
1047
+ new_line = copy.deepcopy(line)
1048
+ basename, ext = os.path.splitext(content["filename"])
1049
+ new_line["content"]["filename"] = f"{basename}-{i}{ext}"
1050
+ new_line["content"]["video_time_stamp"] = image_time_stamp
1051
+
1052
+ if i == len(video_time_stamp) - 1:
1053
+ new_line["content"]["is_final_grid"] = True
1054
+
1055
+ for last_frame_target_key in ["lens_keywords", "lens_local_keywords", "speech_to_text"]:
1056
+ if last_frame_target_key in content:
1057
+ new_line["content"][last_frame_target_key] = content[last_frame_target_key]
1058
+
1059
+ new_vlm_chat.append(new_line)
1060
+ else:
1061
+ new_vlm_chat.append(line)
1062
+
1063
+ return new_vlm_chat, all_images, is_video_list
1064
+
1065
+
1066
+ def process_video(video_bytesio, max_num_grids, max_image_cnt, vit_input_size):
1067
+ """
1068
+ Processes a video file and extracts frames suitable for vision transformer (ViT) input.
1069
+
1070
+ The function reads video data from a BytesIO object, extracts a limited number of frames
1071
+ based on `max_num_grids` and `max_image_cnt`, and resizes them to the appropriate ViT input size.
1072
+
1073
+ Args:
1074
+ video_bytesio (io.BytesIO): A BytesIO object containing the raw video file data.
1075
+ max_num_grids (int): The maximum number of grids allowed (e.g., for tiling or patching).
1076
+ max_image_cnt (int): The maximum number of frames to extract from the video.
1077
+ vit_input_size (int): The desired input size (height and width) for the ViT model.
1078
+
1079
+ Returns:
1080
+ Tuple[List[PIL.Image.Image], List[str]]: The sampled frames combined into grid images, and the time-span label for each combined image (e.g., "0.00s~3.60s").
1081
+ """
1082
+ frames, time_interval = video_decoder(
1083
+ video_bytesio, max_num_grids=max_num_grids, max_image_cnt=max_image_cnt, default_interval=0.4
1084
+ )
1085
+ pil_img_frames, video_time_stamp = combine_frames_into_images(
1086
+ frames, time_interval, max_grid_shape=(max_num_grids, 1), vit_input_size=vit_input_size
1087
+ )
1088
+
1089
+ return pil_img_frames, video_time_stamp
1090
+
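+ # Usage sketch (illustrative comment only, not executed; "clip.mp4" is a placeholder path): the helper
+ # returns one combined grid image per group of sampled frames together with its time-span label:
+ #
+ #   >>> with open("clip.mp4", "rb") as f:
+ #   ...     frames, time_stamps = process_video(io.BytesIO(f.read()), max_num_grids=9, max_image_cnt=12, vit_input_size=378)
+ #   >>> len(frames) == len(time_stamps)
+ #   True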
1091
+
1092
+ def load_image(image_src):
1093
+ """
1094
+ Loads an image from various sources (file path, URL, base64 string, or raw bytes)
1095
+ and returns it as a PIL Image object.
1096
+
1097
+ Args:
1098
+ image_src (str or bytes): The image source. It can be:
1099
+ - A local file path
1100
+ - A URL
1101
+ - A base64-encoded string
1102
+ - Raw image bytes
1103
+
1104
+ Returns:
1105
+ PIL.Image.Image: The loaded image as a PIL Image object.
1106
+
1107
+ Raises:
1108
+ ValueError: If the image cannot be loaded or the format is unsupported.
1109
+ TypeError: If the input is not of type str or bytes.
1110
+ """
1111
+ try:
1112
+ # 1. If input is bytes type
1113
+ if isinstance(image_src, bytes):
1114
+ return Image.open(io.BytesIO(image_src))
1115
+
1116
+ # 2. If input is str type (path, URL, base64)
1117
+ if isinstance(image_src, str):
1118
+ # 2a. Check if it's a Base64 data URI format ('data:image/...')
1119
+ if image_src.startswith("data:image"):
1120
+ try:
1121
+ # Remove the 'data:image/...;base64,' part and decode
1122
+ header, encoded = image_src.split(",", 1)
1123
+ image_bytes = base64.b64decode(encoded)
1124
+ return Image.open(io.BytesIO(image_bytes))
1125
+ except (ValueError, base64.binascii.Error) as e:
1126
+ raise ValueError(f"Invalid base64 data URI format: {e}") from e
1127
+
1128
+ # 2b. Check if it's a URL format ('http://' or 'https://')
1129
+ elif image_src.startswith("http://") or image_src.startswith("https://"):
1130
+ try:
1131
+ response = requests.get(image_src, stream=True, timeout=10)
1132
+ response.raise_for_status() # Raise an exception for HTTP errors
1133
+ image_bytes = response.content
1134
+ return Image.open(io.BytesIO(image_bytes))
1135
+ except requests.exceptions.RequestException as e:
1136
+ raise ValueError(f"Error loading image from URL '{image_src}': {e}") from e
1137
+
1138
+ # 2c. Assume it's a local file path
1139
+ else:
1140
+ return Image.open(image_src)
1141
+
1142
+ else:
1143
+ raise TypeError(f"Unsupported image_src type: {type(image_src)}")
1144
+
1145
+ # Common exception handling
1146
+ except FileNotFoundError:
1147
+ raise ValueError(f"Image loading error: File not found '{image_src}'")
1148
+ except UnidentifiedImageError:
1149
+ raise ValueError("Image loading error: Cannot identify image file format.")
1150
+ except IOError as e:
1151
+ raise ValueError(f"Image loading error (I/O): {e}") from e
1152
+ except Exception as e:
1153
+ raise ValueError(f"Unexpected error during image loading: {e}") from e
1154
+
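+ # Usage sketch (illustrative comment only, not executed; the path and URL are placeholders): every
+ # supported source type resolves to a PIL image:
+ #
+ #   >>> img = load_image("/path/to/cat.jpg")                  # local file path
+ #   >>> img = load_image("https://example.com/cat.jpg")       # URL
+ #   >>> img = load_image("data:image/jpeg;base64,...")        # base64 data URI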
1155
+
1156
+ def load_video_to_bytesio(video_src):
1157
+ """
1158
+ Loads video data from various sources (file path, URL, base64 string, or raw bytes)
1159
+ and returns an `io.BytesIO` object containing the raw video content.
1160
+
1161
+ Args:
1162
+ video_src (str or bytes): The video source. Supported formats include:
1163
+ - Local file path
1164
+ - URL
1165
+ - Base64-encoded data URI string
1166
+ - Raw video bytes
1167
+
1168
+ Returns:
1169
+ io.BytesIO: A `BytesIO` object containing the loaded video data.
1170
+
1171
+ Raises:
1172
+ ValueError: If the video cannot be loaded due to issues such as an invalid path,
1173
+ URL failure, malformed base64 string, or unsupported format.
1174
+ TypeError: If the input is not a `str` or `bytes` object.
1175
+ """
1176
+ video_bytes = None
1177
+ try:
1178
+ # 1. If input is bytes type
1179
+ if isinstance(video_src, bytes):
1180
+ video_bytes = video_src
1181
+
1182
+ # 2. If input is str type (path, URL, base64)
1183
+ elif isinstance(video_src, str):
1184
+ # 2a. Check if it's a Base64 data URI format ('data:video/...')
1185
+ if video_src.startswith("data:video"):
1186
+ try:
1187
+ # Remove the 'data:video/...;base64,' part and decode
1188
+ header, encoded = video_src.split(",", 1)
1189
+ video_bytes = base64.b64decode(encoded)
1190
+ except (ValueError, base64.binascii.Error) as e:
1191
+ raise ValueError(f"Invalid base64 data URI format: {e}") from e
1192
+
1193
+ # 2b. Check if it looks like a URL
1194
+ elif urlparse(video_src).scheme in ("http", "https"):
1195
+ try:
1196
+ response = requests.get(
1197
+ video_src, stream=True, timeout=30
1198
+ ) # Increased timeout for potentially large videos
1199
+ response.raise_for_status() # Raise an exception for HTTP errors (4xx or 5xx)
1200
+ # Read all content from the stream into bytes
1201
+ video_bytes = response.content
1202
+ except requests.exceptions.MissingSchema:
1203
+ # If urlparse thinks it's a scheme but requests disagrees (e.g., "http:/example.com")
1204
+ # Treat it as a potential file path below.
1205
+ pass
1206
+ except requests.exceptions.RequestException as e:
1207
+ raise ValueError(f"Error loading video from URL '{video_src}': {e}") from e
1208
+
1209
+ # 2c. Assume it's a local file path if not base64 or confirmed URL
1210
+ if video_bytes is None: # Only attempt file read if not already loaded as base64 or URL failed gracefully
1211
+ # Check if it could potentially be a file path
1212
+ # Note: This check is basic. A string like "http:/path/file" might incorrectly be treated as a path here
1213
+ # if the requests call failed due to MissingSchema. More robust path validation could be added.
1214
+ if (
1215
+ os.path.exists(video_src) or "/" in video_src or "\\" in video_src
1216
+ ): # Basic check if it resembles a path
1217
+ try:
1218
+ with open(video_src, "rb") as f:
1219
+ video_bytes = f.read()
1220
+ except FileNotFoundError:
1221
+ raise ValueError(f"Video loading error: File not found at path '{video_src}'")
1222
+ except IsADirectoryError:
1223
+ raise ValueError(f"Video loading error: Path '{video_src}' is a directory, not a file.")
1224
+ except IOError as e:
1225
+ raise ValueError(f"Video loading error (I/O) for path '{video_src}': {e}") from e
1226
+ else:
1227
+ # If it's not base64, not a valid downloadable URL, and doesn't look like a path/doesn't exist
1228
+ raise ValueError(f"Unsupported string input format or resource not found: '{video_src}'")
1229
+
1230
+ # 3. If the type is unsupported
1231
+ else:
1232
+ raise TypeError(f"Unsupported video_src type: {type(video_src)}")
1233
+
1234
+ # Final check if video_bytes was successfully obtained
1235
+ if video_bytes is None:
1236
+ raise ValueError(f"Could not load video data from the provided source: {video_src}")
1237
+
1238
+ # Return the bytes wrapped in BytesIO
1239
+ return io.BytesIO(video_bytes)
1240
+
1241
+ # Catch specific exceptions first for better error reporting
1242
+ except FileNotFoundError as e: # Should be caught above, but as a safeguard
1243
+ raise ValueError(f"Video loading error: File not found '{video_src}'") from e
1244
+ except requests.exceptions.RequestException as e: # Already handled, but for clarity
1245
+ raise ValueError(f"Video loading error (Network): {e}") from e
1246
+ except (ValueError, TypeError) as e: # Re-raise ValueErrors/TypeErrors raised intentionally within the try block
1247
+ raise e
1248
+ except Exception as e:
1249
+ # Catch any other unexpected errors during processing
1250
+ raise ValueError(f"Unexpected error during video loading from source '{video_src}': {e}") from e
1251
+
1252
+
1253
+ def video_decoder(video_bytesio, max_num_grids, max_image_cnt, default_interval=0.4):
1254
+ """
1255
+ Decodes video data from a BytesIO object and returns a list of extracted frames.
1256
+
1257
+ Args:
1258
+ video_bytesio (io.BytesIO): A BytesIO object containing the raw video data.
1259
+ max_num_grids (int): Maximum number of grids allowed per image. Used to determine how many frames to extract.
1260
+ max_image_cnt (int): Maximum number of frames to extract from the video.
1261
+ default_interval (float, optional): Target time interval (in seconds) between sampled frames. Defaults to 0.4.
1262
+
1263
+ Returns:
1264
+ Tuple:
1265
+ frames (List[PIL.Image.Image]): A list of extracted frames as PIL Images.
1266
+ time_interval (float): Time interval (in seconds) between selected frames.
1267
+ """
1268
+ error_messages = []
1269
+ frames = []
1270
+
1271
+ # 1. Try decoding the video using Decord.
1272
+ try:
1273
+ vr = VideoReader(video_bytesio, ctx=cpu(0), num_threads=8)
1274
+ fps = vr.get_avg_fps()
1275
+ play_time = len(vr) / fps
1276
+ total_frames = len(vr)
1277
+ frame_indices, time_interval = extract_frame_indices(
1278
+ play_time, total_frames, fps, max_num_grids, max_image_cnt, default_interval=default_interval
1279
+ ) # Sample every 0.4 seconds; if the video is too long, apply uniform sampling instead.
1280
+ if frame_indices is None:
1281
+ frame_indices = range(len(vr)) # Convert all frames.
1282
+ batch_frames = vr.get_batch(frame_indices).asnumpy()
1283
+ frames = [Image.fromarray(frame).convert("RGB") for frame in batch_frames]
1284
+ return frames, time_interval
1285
+ except Exception as e:
1286
+ print("error with decord")
1287
+ error_messages.append(f"Decord 실패: {e}")
1288
+
1289
+ # 2. Fallback: Try decoding the video using PyAV.
1290
+ try:
1291
+ container = av.open(video_bytesio)
1292
+ fps = container.streams.video[0].average_rate
1293
+ total_frames = container.streams.video[0].frames
1294
+ play_time = total_frames / fps
1295
+ frame_indices, time_interval = extract_frame_indices(
1296
+ play_time, total_frames, fps, max_num_grids, max_image_cnt, default_interval=default_interval
1297
+ ) # Sample frames every 0.4 seconds. If the video is long, use uniform sampling to limit the number of frames.
1298
+ # Even if frame_indices were assigned using Decord, reprocess them to be compatible with PyAV.
1299
+ target_indices = None if frame_indices is None else set(frame_indices)
1300
+ frames = []
1301
+ for i, frame in enumerate(container.decode(video=0)):
1302
+ if target_indices is not None and i not in target_indices:
1303
+ continue # Skip frames that are not in the required indices.
1304
+ pil_frame = Image.fromarray(frame.to_ndarray(format="rgb24")).convert("RGB")
1305
+ frames.append(pil_frame)
1306
+ if frames:
1307
+ return frames, time_interval
1308
+ else:
1309
+ raise Exception("Decoding with PyAV succeeded, but no frames were extracted.")
1310
+ except Exception as e:
1311
+ error_messages.append(f"PyAV failed: {e}")
1312
+
1313
+ # 3. Fallback: Try decoding the video using OpenCV.
1314
+ try:
1315
+ # cv2.VideoCapture cannot read from an in-memory buffer, so write the bytes to a temporary file first.
+ import tempfile
+
+ with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp_file:
+ tmp_file.write(video_bytesio.getvalue())
+ tmp_video_path = tmp_file.name
+
+ cap = cv2.VideoCapture(tmp_video_path)
1319
+ total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
1320
+ fps = cap.get(cv2.CAP_PROP_FPS)
1321
+ play_time = total_frames / fps
1322
+ frame_indices, time_interval = extract_frame_indices(
1323
+ play_time, total_frames, fps, max_num_grids, max_image_cnt, default_interval=default_interval
1324
+ ) # Sample frames every 0.4 seconds; if the video is too long, apply uniform sampling to limit the total number of frames.
1325
+ if frame_indices is None:
1326
+ frame_indices = range(total_frames) # Convert all frames.
1327
+
1328
+ index_set = set(frame_indices) # Convert to a set for faster lookup.
1329
+ current_index = 0
1330
+
1331
+ while cap.isOpened():
1332
+ ret, frame = cap.read()
1333
+ if not ret:
1334
+ break
1335
+ if current_index in index_set:
1336
+ frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).convert("RGB"))
1337
+ current_index += 1
1338
+ if current_index > max(index_set): # Stop processing once all required indices have been handled.
1339
+ break
1340
+
1341
+ cap.release()
+ os.remove(tmp_video_path)
1342
+ if frames:
1343
+ return frames, time_interval
1344
+ except Exception as e:
1345
+ error_messages.append(f"OpenCV failed: {e}")
1346
+
1347
+ if error_messages:
1348
+ raise Exception(f"All decoding attempts have failed.: {error_messages}")
1349
+
1350
+
1351
+ def convert_format_for_multi_image(img, json, convert_key_list=["words", "text", "objects", "entities"]):
1352
+ """
1353
+ Converts the format of image and annotation data from a single-image dataset to a multi-image dataset format.
1354
+
1355
+ Single-image datasets typically return a single image and its associated annotation as individual objects.
1356
+ This function wraps them in a dictionary format used by multi-image datasets.
1357
+
1358
+ Args:
1359
+ img: The input image (e.g., a PIL Image or NumPy array).
1360
+ json: The annotation data associated with the image.
1361
+ convert_key_list (List[str], optional): A list of keys to extract and convert from the original JSON.
1362
+ Defaults to ["words", "text", "objects", "entities"].
1363
+
1364
+ Returns:
1365
+ Tuple[bool, Dict, Dict]:
+ - A flag indicating whether the input was already in multi-image format.
+ - A dictionary mapping image IDs to images (e.g., {"00": img}).
+ - The annotation JSON with the listed keys wrapped under the same image IDs.
1368
+ """
1369
+ is_multi_image_dataset = isinstance(img, dict)
1370
+ if not is_multi_image_dataset:
1371
+ img = {"00": img}
1372
+
1373
+ for convert_key in convert_key_list:
1374
+ if convert_key in json:
1375
+ json[convert_key] = {"00": json[convert_key]}
1376
+
1377
+ for json_key in json:
1378
+ if "region" in json_key:
1379
+ json[json_key] = {"00": json[json_key]}
1380
+
1381
+ return is_multi_image_dataset, img, json
1382
+
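+ # Usage sketch (illustrative comment only, not executed; `pil_img` stands for any loaded image): a
+ # single-image sample is wrapped under the image ID "00":
+ #
+ #   >>> is_multi, imgs, ann = convert_format_for_multi_image(pil_img, {"words": [["hello", [0, 0, 10, 10]]]})
+ #   >>> is_multi, list(imgs.keys()), list(ann["words"].keys())
+ #   (False, ['00'], ['00'])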
1383
+
1384
+ def convert_tags_for_video(img, json):
1385
+ """
1386
+ Converts <video_00> tags to <image_xx> tags based on the number of video frames.
1387
+
1388
+ In video datasets, annotations often use a generic <video_00> tag. This function replaces that tag
1389
+ with frame-specific tags such as <image_00>, <image_01>, ..., <image_NN> based on the number of frames in `img`.
1390
+
1391
+ Args:
1392
+ img: A list of video frames (e.g., list of PIL Images or NumPy arrays).
1393
+ json: The annotation data containing <video_00> tags to be replaced.
1394
+
1395
+ Returns:
1396
+ Tuple: The original frame list and the updated annotation JSON with frame-specific <image_xx> tags.
1397
+ """
1398
+ image_tag = "".join([f"<image_{idx:02d}>" for idx in range(len(img))])
1399
+ # image_tag = "<image_00>" # Use this format to construct and insert image-specific tags.
1400
+ for json_key in json:
1401
+ if "qa_pairs" in json_key:
1402
+ new_qa_pairs = []
1403
+ for qa_pair in json[json_key]:
1404
+ question = qa_pair[0]
1405
+ # Replace <video_00> tags with corresponding <image_xx> tags.
1406
+ question = question.replace("<video_00>", image_tag)
1407
+ new_qa_pairs.append([question, qa_pair[1]])
1408
+ json[json_key] = new_qa_pairs
1409
+
1410
+ return img, json
1411
+
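+ # Usage sketch (illustrative comment only, not executed): with two frames, the single <video_00> tag
+ # is replaced by one <image_xx> tag per frame:
+ #
+ #   >>> frames = [None, None]  # two frame placeholders
+ #   >>> ann = {"qa_pairs": [["<video_00> What happens in the clip?", "..."]]}
+ #   >>> _, ann = convert_tags_for_video(frames, ann)
+ #   >>> ann["qa_pairs"][0][0]
+ #   '<image_00><image_01> What happens in the clip?'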
1412
+
1413
+ def split_list(input_list, split_value):
1414
+ """
1415
+ Splits a list into sublists using a specified delimiter value.
1416
+
1417
+ Each time `split_value` is encountered in `input_list`, a new sublist is started.
1418
+ The delimiter itself is not included in the output.
1419
+
1420
+ Args:
1421
+ input_list (List[Any]): The input list to split.
1422
+ split_value (Any): The value used as the delimiter for splitting.
1423
+
1424
+ Returns:
1425
+ List[List[Any]]: A list of sublists, split by the specified delimiter.
1426
+
1427
+ Example:
1428
+ >>> split_list(["a", "b", "|", "c", "d", "|", "e"], "|")
1429
+ [['a', 'b'], ['c', 'd'], ['e']]
1430
+ """
1431
+ temp_list = []
1432
+ result = []
1433
+
1434
+ for value in input_list:
1435
+ if value == split_value:
1436
+ result.append(temp_list)
1437
+ temp_list = []
1438
+ else:
1439
+ temp_list.append(value)
1440
+ result.append(temp_list)
1441
+
1442
+ return result
1443
+
1444
+
1445
+ def combine_frames_into_images(frames, time_interval, max_grid_shape=(3, 3), vit_input_size=378):
1446
+ """
1447
+ Combines a sequence of video frames into grid-based images and generates corresponding time range labels.
1448
+
1449
+ Frames are grouped and arranged into a grid (e.g., 3x3) such that each combined image contains up to
1450
+ `max_grid_shape[0] * max_grid_shape[1]` frames. Each combined image is resized to the given ViT input size.
1451
+
1452
+ Args:
1453
+ frames (List[PIL.Image.Image]): A list of frames extracted from a video.
1454
+ time_interval (float): Time interval (in seconds) between consecutive frames.
1455
+ max_grid_shape (Tuple[int, int], optional): The maximum grid shape as (cols, rows); for video the row count must be 1. Defaults to (3, 3).
1456
+ vit_input_size (int, optional): The target size (height and width) for the Vision Transformer input. Defaults to 378.
1457
+
1458
+ Returns:
1459
+ Tuple:
1460
+ image_list (List[PIL.Image.Image]): A list of grid-combined images.
1461
+ image_time_stamps (List[str]): A list of time span labels for each combined image,
1462
+ e.g., ["0.00s~1.50s", "1.50s~3.00s", ...].
1463
+ """
1464
+ # grid_size = int(np.sqrt(max_num_grids))
1465
+ # assert grid_size**2 == max_num_grids, "max_num_grids must be a perfect square."
1466
+ max_num_grids = max_grid_shape[0] * max_grid_shape[1]
1467
+ assert (
1468
+ max_grid_shape[1] == 1
1469
+ ), "For video processing, frames are concatenated horizontally into a single wide image, so max_grid_shape[1] must be 1."
1470
+
1471
+ # List to store the resulting combined images.
1472
+ image_list = []
1473
+
1474
+ # Calculate the number of canvases needed.
1475
+ num_frames = len(frames)
1476
+ num_canvases = num_frames // max_num_grids
1477
+ leftover_frames = num_frames % max_num_grids
1478
+
1479
+ time_stamp = 0 # second
1480
+ image_time_stamps = []
1481
+
1482
+ for canvas_idx in range(num_canvases):
1483
+ # Initialize the current canvas.
1484
+ combined_image = Image.new(
1485
+ "RGB", (vit_input_size * max_grid_shape[0], vit_input_size * max_grid_shape[1]), color=(0, 0, 0)
1486
+ )
1487
+
1488
+ # Determine the frames to fill in the current canvas.
1489
+ start_idx = canvas_idx * max_num_grids
1490
+ end_idx = min(start_idx + max_num_grids, num_frames)
1491
+
1492
+ for idx in range(start_idx, end_idx):
1493
+ img = frames[idx]
1494
+
1495
+ # Resize each frame to a square shape.
1496
+ img_resized = img.resize((vit_input_size, vit_input_size))
1497
+
1498
+ # Calculate the (row, column) position to place the frame within the grid layout.
1499
+ local_idx = idx - start_idx
1500
+ x_offset = (local_idx % max_grid_shape[0]) * vit_input_size
1501
+ y_offset = (local_idx // max_grid_shape[0]) * vit_input_size
1502
+
1503
+ # Calculate the position to place the frame in the grid.
1504
+ combined_image.paste(img_resized, (x_offset, y_offset))
1505
+
1506
+ # Append the current canvas to the result list.
1507
+ image_list.append(combined_image)
1508
+ frame_cnt = end_idx - start_idx
1509
+ image_time_stamps.append(f"{time_stamp:.2f}s~{time_stamp + frame_cnt * time_interval:.2f}s")
1510
+ time_stamp += frame_cnt * time_interval
1511
+
1512
+ if leftover_frames > 0:
1513
+ # canvas_idx is undefined when num_canvases == 0; assign the next canvas index explicitly so the leftover frames are handled correctly.
1514
+ canvas_idx = num_canvases
1515
+ # Add the remaining frames to the final canvas.
1516
+ combined_image = Image.new("RGB", (vit_input_size * leftover_frames, vit_input_size * 1), color=(0, 0, 0))
1517
+
1518
+ for idx in range(leftover_frames):
1519
+ img = frames[num_canvases * max_num_grids + idx]
1520
+
1521
+ # Resize the frame to a square (equal width and height).
1522
+ img_resized = img.resize((vit_input_size, vit_input_size))
1523
+
1524
+ # Calculate the (row, column) position to place the frame within the grid layout.
1525
+ x_offset = (idx % leftover_frames) * vit_input_size
1526
+ y_offset = (idx // leftover_frames) * vit_input_size
1527
+
1528
+ # Calculate the position to place the frame within the grid layout.
1529
+ combined_image.paste(img_resized, (x_offset, y_offset))
1530
+
1531
+ # Add the current canvas to the list of combined images.
1532
+ image_list.append(combined_image)
1533
+ frame_cnt = leftover_frames
1534
+ image_time_stamps.append(f"{time_stamp:.2f}s~{time_stamp + frame_cnt * time_interval:.2f}s")
1535
+ time_stamp += frame_cnt * time_interval
1536
+
1537
+ return image_list, image_time_stamps
1538
+
1539
+
1540
+ def extract_frame_indices(play_time, total_frames, fps, max_num_grids, max_image_cnt, default_interval=0.4):
1541
+ """
1542
+ Extracts specific frame indices from a video based on duration, frame count, and sampling strategy.
1543
+
1544
+ The function determines which frames to extract given the video duration (`play_time`),
1545
+ total frame count, and frame rate. It samples frames at regular intervals (default: 0.4s),
1546
+ but if the number of frames exceeds the limit defined by `max_num_grids * max_image_cnt`,
1547
+ it performs uniform sampling to stay within that limit.
1548
+
1549
+ Args:
1550
+ play_time (float): Total play time of the video in seconds.
1551
+ total_frames (int): Total number of frames in the video.
1552
+ fps (float): Frames per second of the video.
1553
+ max_num_grids (int): Maximum number of grids to display.
1554
+ max_image_cnt (int): Maximum number of images per grid.
1555
+ default_interval (float, optional): Interval in seconds between frame samples. Defaults to 0.4.
1556
+
1557
+ Returns:
1558
+ Tuple:
1559
+ frame_indices (List[int]): A list of selected frame indices.
1560
+ time_interval (float): Time interval between selected frames (in seconds).
1561
+ """
1562
+
1563
+ # Calculate how many frames to extract with the default interval
1564
+ default_frame_count = max(int(play_time / default_interval), 1)  # guard against zero for very short clips
1565
+
1566
+ # Maximum frames allowed based on max_num_grids and max_image_cnt
1567
+ max_frames_allowed = max_num_grids * max_image_cnt
1568
+
1569
+ # Determine whether we can use the default interval or need uniform sampling
1570
+ if default_frame_count <= max_frames_allowed:
1571
+ # Default interval is sufficient, extract frames every 0.4 seconds
1572
+ frame_interval = max(int(total_frames / default_frame_count), 1)
1573
+ else:
1574
+ # Use uniform sampling to fit within max_frames_allowed
1575
+ frame_interval = max(int(total_frames / max_frames_allowed), 1)
1576
+
1577
+ # Extract frame indices at the calculated interval
1578
+ selected_indices = list(range(0, total_frames, frame_interval))
1579
+
1580
+ time_interval = frame_interval / fps
1581
+
1582
+ # Ensure the number of selected indices does not exceed max_frames_allowed
1583
+ return selected_indices[:max_frames_allowed], time_interval
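Together, extract_frame_indices and combine_frames_into_images form the video path: sample frame indices first, then tile the decoded frames into wide canvases. A hedged usage sketch follows; decord is only one possible video reader, and the parameter values mirror max_num_grids=9 / max_image_cnt=12 from the preprocessor config below:

    import decord
    from PIL import Image

    vr = decord.VideoReader("clip.mp4")        # placeholder path
    fps = vr.get_avg_fps()
    total_frames = len(vr)
    play_time = total_frames / fps

    indices, time_interval = extract_frame_indices(
        play_time, total_frames, fps, max_num_grids=9, max_image_cnt=12
    )
    frames = [Image.fromarray(vr[i].asnumpy()) for i in indices]

    # Tile horizontally (grid height 1, as the assert above requires) and keep the time spans.
    images, time_stamps = combine_frames_into_images(
        frames, time_interval, max_grid_shape=(9, 1), vit_input_size=378
    )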
preprocessor_config.json ADDED
@@ -0,0 +1,135 @@
1
+ {
2
+ "processor_class": "HCXVisionProcessor",
3
+ "auto_map": {
4
+ "AutoProcessor": "preprocessor.HCXVisionProcessor"
5
+ },
6
+ "anyres": true,
7
+ "crop_size": {
8
+ "height": 378,
9
+ "width": 378
10
+ },
11
+ "do_center_crop": true,
12
+ "do_convert_rgb": true,
13
+ "do_normalize": true,
14
+ "do_rescale": true,
15
+ "do_resize": true,
16
+ "max_num_grids": 9,
17
+ "max_image_cnt": 12,
18
+ "num_queries_vis_abstractor": 81,
19
+ "num_queries_vis_abstractor_video_fast": 9,
20
+ "num_queries_vis_abstractor_video_slow": 81,
21
+ "first_last_frames_slow": false,
22
+ "image_mean": [
23
+ 0.5,
24
+ 0.5,
25
+ 0.5
26
+ ],
27
+ "image_processor_type": "HCXVisionProcessor",
28
+ "image_std": [
29
+ 0.5,
30
+ 0.5,
31
+ 0.5
32
+ ],
33
+ "pad_to_square": true,
34
+ "patch_size": 14,
35
+ "possible_resolutions": [
36
+ [
37
+ 378,
38
+ 378
39
+ ],
40
+ [
41
+ 378,
42
+ 756
43
+ ],
44
+ [
45
+ 378,
46
+ 1134
47
+ ],
48
+ [
49
+ 378,
50
+ 1512
51
+ ],
52
+ [
53
+ 378,
54
+ 1890
55
+ ],
56
+ [
57
+ 378,
58
+ 2268
59
+ ],
60
+ [
61
+ 378,
62
+ 2646
63
+ ],
64
+ [
65
+ 378,
66
+ 3024
67
+ ],
68
+ [
69
+ 378,
70
+ 3402
71
+ ],
72
+ [
73
+ 756,
74
+ 378
75
+ ],
76
+ [
77
+ 756,
78
+ 756
79
+ ],
80
+ [
81
+ 756,
82
+ 1134
83
+ ],
84
+ [
85
+ 756,
86
+ 1512
87
+ ],
88
+ [
89
+ 1134,
90
+ 378
91
+ ],
92
+ [
93
+ 1134,
94
+ 756
95
+ ],
96
+ [
97
+ 1134,
98
+ 1134
99
+ ],
100
+ [
101
+ 1512,
102
+ 378
103
+ ],
104
+ [
105
+ 1512,
106
+ 756
107
+ ],
108
+ [
109
+ 1890,
110
+ 378
111
+ ],
112
+ [
113
+ 2268,
114
+ 378
115
+ ],
116
+ [
117
+ 2646,
118
+ 378
119
+ ],
120
+ [
121
+ 3024,
122
+ 378
123
+ ],
124
+ [
125
+ 3402,
126
+ 378
127
+ ]
128
+ ],
129
+ "resample": 2,
130
+ "rescale_factor": 0.00392156862745098,
131
+ "size": {
132
+ "shortest_edge": 378
133
+ },
134
+ "unpad": true
135
+ }
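Since the config registers preprocessor.HCXVisionProcessor under auto_map, the processor can be loaded through the Transformers auto class with remote code enabled; a minimal sketch, with the repo id left as a placeholder:

    from transformers import AutoProcessor

    processor = AutoProcessor.from_pretrained(
        "path/or/repo-id-of-this-model",   # placeholder; point at this repository
        trust_remote_code=True,            # required so preprocessor.HCXVisionProcessor is used
    )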
pytorch_model-00001-of-00004.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b34c808a1106a2d41f4a7202226d86d01a74b70fb267b2aa6a5e118b81806248
3
+ size 1995404833
pytorch_model-00002-of-00004.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c461a7f04797ab043c9a28dc90f8f60d5607e97a0a67b7d7660f760128d586df
3
+ size 1963104270
pytorch_model-00003-of-00004.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:21b2e6e12f24b28beb82ddc329bc3177f6a8f3f105540801edb6c50cb7233276
3
+ size 1988270158
pytorch_model-00004-of-00004.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1c50467661833482053fbbac1ea806cc818f09cde08545b731c7836667e91a99
3
+ size 1495999777
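The four shards above are git-lfs pointer files; the index added next maps every parameter name to its shard. A short sketch of inspecting that map with the standard library:

    import json

    with open("pytorch_model.bin.index.json") as f:
        index = json.load(f)

    # Which shard holds the embedding table?
    print(index["weight_map"]["language_model.model.embed_tokens.weight"])
    # -> "pytorch_model-00001-of-00004.bin"
    print(index["metadata"]["total_size"], "bytes of weights across the four shards")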
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,830 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 7442487040
4
+ },
5
+ "weight_map": {
6
+ "image_newline": "pytorch_model-00001-of-00004.bin",
7
+ "language_model.lm_head.weight": "pytorch_model-00001-of-00004.bin",
8
+ "language_model.model.embed_tokens.weight": "pytorch_model-00001-of-00004.bin",
9
+ "language_model.model.layers.0.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
10
+ "language_model.model.layers.0.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
11
+ "language_model.model.layers.0.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
12
+ "language_model.model.layers.0.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
13
+ "language_model.model.layers.0.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
14
+ "language_model.model.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
15
+ "language_model.model.layers.0.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
16
+ "language_model.model.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
17
+ "language_model.model.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
18
+ "language_model.model.layers.1.input_layernorm.weight": "pytorch_model-00001-of-00004.bin",
19
+ "language_model.model.layers.1.mlp.down_proj.weight": "pytorch_model-00001-of-00004.bin",
20
+ "language_model.model.layers.1.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
21
+ "language_model.model.layers.1.mlp.up_proj.weight": "pytorch_model-00001-of-00004.bin",
22
+ "language_model.model.layers.1.post_attention_layernorm.weight": "pytorch_model-00001-of-00004.bin",
23
+ "language_model.model.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
24
+ "language_model.model.layers.1.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
25
+ "language_model.model.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
26
+ "language_model.model.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
27
+ "language_model.model.layers.10.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
28
+ "language_model.model.layers.10.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
29
+ "language_model.model.layers.10.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
30
+ "language_model.model.layers.10.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
31
+ "language_model.model.layers.10.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
32
+ "language_model.model.layers.10.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
33
+ "language_model.model.layers.10.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
34
+ "language_model.model.layers.10.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
35
+ "language_model.model.layers.10.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
36
+ "language_model.model.layers.11.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
37
+ "language_model.model.layers.11.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
38
+ "language_model.model.layers.11.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
39
+ "language_model.model.layers.11.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
40
+ "language_model.model.layers.11.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
41
+ "language_model.model.layers.11.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
42
+ "language_model.model.layers.11.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
43
+ "language_model.model.layers.11.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
44
+ "language_model.model.layers.11.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
45
+ "language_model.model.layers.12.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
46
+ "language_model.model.layers.12.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
47
+ "language_model.model.layers.12.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
48
+ "language_model.model.layers.12.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
49
+ "language_model.model.layers.12.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
50
+ "language_model.model.layers.12.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
51
+ "language_model.model.layers.12.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
52
+ "language_model.model.layers.12.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
53
+ "language_model.model.layers.12.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
54
+ "language_model.model.layers.13.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
55
+ "language_model.model.layers.13.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
56
+ "language_model.model.layers.13.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
57
+ "language_model.model.layers.13.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
58
+ "language_model.model.layers.13.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
59
+ "language_model.model.layers.13.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
60
+ "language_model.model.layers.13.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
61
+ "language_model.model.layers.13.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
62
+ "language_model.model.layers.13.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
63
+ "language_model.model.layers.14.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
64
+ "language_model.model.layers.14.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
65
+ "language_model.model.layers.14.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
66
+ "language_model.model.layers.14.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
67
+ "language_model.model.layers.14.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
68
+ "language_model.model.layers.14.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
69
+ "language_model.model.layers.14.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
70
+ "language_model.model.layers.14.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
71
+ "language_model.model.layers.14.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
72
+ "language_model.model.layers.15.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
73
+ "language_model.model.layers.15.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
74
+ "language_model.model.layers.15.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
75
+ "language_model.model.layers.15.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
76
+ "language_model.model.layers.15.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
77
+ "language_model.model.layers.15.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
78
+ "language_model.model.layers.15.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
79
+ "language_model.model.layers.15.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
80
+ "language_model.model.layers.15.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
81
+ "language_model.model.layers.16.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
82
+ "language_model.model.layers.16.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
83
+ "language_model.model.layers.16.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
84
+ "language_model.model.layers.16.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
85
+ "language_model.model.layers.16.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
86
+ "language_model.model.layers.16.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
87
+ "language_model.model.layers.16.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
88
+ "language_model.model.layers.16.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
89
+ "language_model.model.layers.16.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
90
+ "language_model.model.layers.17.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
91
+ "language_model.model.layers.17.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
92
+ "language_model.model.layers.17.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
93
+ "language_model.model.layers.17.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
94
+ "language_model.model.layers.17.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
95
+ "language_model.model.layers.17.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
96
+ "language_model.model.layers.17.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
97
+ "language_model.model.layers.17.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
98
+ "language_model.model.layers.17.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
99
+ "language_model.model.layers.18.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
100
+ "language_model.model.layers.18.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
101
+ "language_model.model.layers.18.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
102
+ "language_model.model.layers.18.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
103
+ "language_model.model.layers.18.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
104
+ "language_model.model.layers.18.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
105
+ "language_model.model.layers.18.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
106
+ "language_model.model.layers.18.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
107
+ "language_model.model.layers.18.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
108
+ "language_model.model.layers.19.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
109
+ "language_model.model.layers.19.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
110
+ "language_model.model.layers.19.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
111
+ "language_model.model.layers.19.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
112
+ "language_model.model.layers.19.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
113
+ "language_model.model.layers.19.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
114
+ "language_model.model.layers.19.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
115
+ "language_model.model.layers.19.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
116
+ "language_model.model.layers.19.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
117
+ "language_model.model.layers.2.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
118
+ "language_model.model.layers.2.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
119
+ "language_model.model.layers.2.mlp.gate_proj.weight": "pytorch_model-00001-of-00004.bin",
120
+ "language_model.model.layers.2.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
121
+ "language_model.model.layers.2.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
122
+ "language_model.model.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
123
+ "language_model.model.layers.2.self_attn.o_proj.weight": "pytorch_model-00001-of-00004.bin",
124
+ "language_model.model.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
125
+ "language_model.model.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
126
+ "language_model.model.layers.20.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
127
+ "language_model.model.layers.20.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
128
+ "language_model.model.layers.20.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
129
+ "language_model.model.layers.20.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
130
+ "language_model.model.layers.20.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
131
+ "language_model.model.layers.20.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
132
+ "language_model.model.layers.20.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
133
+ "language_model.model.layers.20.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
134
+ "language_model.model.layers.20.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
135
+ "language_model.model.layers.21.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
136
+ "language_model.model.layers.21.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
137
+ "language_model.model.layers.21.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
138
+ "language_model.model.layers.21.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
139
+ "language_model.model.layers.21.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
140
+ "language_model.model.layers.21.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
141
+ "language_model.model.layers.21.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
142
+ "language_model.model.layers.21.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
143
+ "language_model.model.layers.21.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
144
+ "language_model.model.layers.22.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
145
+ "language_model.model.layers.22.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
146
+ "language_model.model.layers.22.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
147
+ "language_model.model.layers.22.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
148
+ "language_model.model.layers.22.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
149
+ "language_model.model.layers.22.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
150
+ "language_model.model.layers.22.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
151
+ "language_model.model.layers.22.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
152
+ "language_model.model.layers.22.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
153
+ "language_model.model.layers.23.input_layernorm.weight": "pytorch_model-00003-of-00004.bin",
154
+ "language_model.model.layers.23.mlp.down_proj.weight": "pytorch_model-00003-of-00004.bin",
155
+ "language_model.model.layers.23.mlp.gate_proj.weight": "pytorch_model-00003-of-00004.bin",
156
+ "language_model.model.layers.23.mlp.up_proj.weight": "pytorch_model-00003-of-00004.bin",
157
+ "language_model.model.layers.23.post_attention_layernorm.weight": "pytorch_model-00003-of-00004.bin",
158
+ "language_model.model.layers.23.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
159
+ "language_model.model.layers.23.self_attn.o_proj.weight": "pytorch_model-00003-of-00004.bin",
160
+ "language_model.model.layers.23.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
161
+ "language_model.model.layers.23.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
162
+ "language_model.model.layers.24.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
163
+ "language_model.model.layers.24.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
164
+ "language_model.model.layers.24.mlp.gate_proj.weight": "pytorch_model-00004-of-00004.bin",
165
+ "language_model.model.layers.24.mlp.up_proj.weight": "pytorch_model-00004-of-00004.bin",
166
+ "language_model.model.layers.24.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
167
+ "language_model.model.layers.24.self_attn.k_proj.weight": "pytorch_model-00003-of-00004.bin",
168
+ "language_model.model.layers.24.self_attn.o_proj.weight": "pytorch_model-00004-of-00004.bin",
169
+ "language_model.model.layers.24.self_attn.q_proj.weight": "pytorch_model-00003-of-00004.bin",
170
+ "language_model.model.layers.24.self_attn.v_proj.weight": "pytorch_model-00003-of-00004.bin",
171
+ "language_model.model.layers.25.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
172
+ "language_model.model.layers.25.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
173
+ "language_model.model.layers.25.mlp.gate_proj.weight": "pytorch_model-00004-of-00004.bin",
174
+ "language_model.model.layers.25.mlp.up_proj.weight": "pytorch_model-00004-of-00004.bin",
175
+ "language_model.model.layers.25.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
176
+ "language_model.model.layers.25.self_attn.k_proj.weight": "pytorch_model-00004-of-00004.bin",
177
+ "language_model.model.layers.25.self_attn.o_proj.weight": "pytorch_model-00004-of-00004.bin",
178
+ "language_model.model.layers.25.self_attn.q_proj.weight": "pytorch_model-00004-of-00004.bin",
179
+ "language_model.model.layers.25.self_attn.v_proj.weight": "pytorch_model-00004-of-00004.bin",
180
+ "language_model.model.layers.26.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
181
+ "language_model.model.layers.26.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
182
+ "language_model.model.layers.26.mlp.gate_proj.weight": "pytorch_model-00004-of-00004.bin",
183
+ "language_model.model.layers.26.mlp.up_proj.weight": "pytorch_model-00004-of-00004.bin",
184
+ "language_model.model.layers.26.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
185
+ "language_model.model.layers.26.self_attn.k_proj.weight": "pytorch_model-00004-of-00004.bin",
186
+ "language_model.model.layers.26.self_attn.o_proj.weight": "pytorch_model-00004-of-00004.bin",
187
+ "language_model.model.layers.26.self_attn.q_proj.weight": "pytorch_model-00004-of-00004.bin",
188
+ "language_model.model.layers.26.self_attn.v_proj.weight": "pytorch_model-00004-of-00004.bin",
189
+ "language_model.model.layers.27.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
190
+ "language_model.model.layers.27.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
191
+ "language_model.model.layers.27.mlp.gate_proj.weight": "pytorch_model-00004-of-00004.bin",
192
+ "language_model.model.layers.27.mlp.up_proj.weight": "pytorch_model-00004-of-00004.bin",
193
+ "language_model.model.layers.27.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
194
+ "language_model.model.layers.27.self_attn.k_proj.weight": "pytorch_model-00004-of-00004.bin",
195
+ "language_model.model.layers.27.self_attn.o_proj.weight": "pytorch_model-00004-of-00004.bin",
196
+ "language_model.model.layers.27.self_attn.q_proj.weight": "pytorch_model-00004-of-00004.bin",
197
+ "language_model.model.layers.27.self_attn.v_proj.weight": "pytorch_model-00004-of-00004.bin",
198
+ "language_model.model.layers.28.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
199
+ "language_model.model.layers.28.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
200
+ "language_model.model.layers.28.mlp.gate_proj.weight": "pytorch_model-00004-of-00004.bin",
201
+ "language_model.model.layers.28.mlp.up_proj.weight": "pytorch_model-00004-of-00004.bin",
202
+ "language_model.model.layers.28.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
203
+ "language_model.model.layers.28.self_attn.k_proj.weight": "pytorch_model-00004-of-00004.bin",
204
+ "language_model.model.layers.28.self_attn.o_proj.weight": "pytorch_model-00004-of-00004.bin",
205
+ "language_model.model.layers.28.self_attn.q_proj.weight": "pytorch_model-00004-of-00004.bin",
206
+ "language_model.model.layers.28.self_attn.v_proj.weight": "pytorch_model-00004-of-00004.bin",
207
+ "language_model.model.layers.29.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
208
+ "language_model.model.layers.29.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
209
+ "language_model.model.layers.29.mlp.gate_proj.weight": "pytorch_model-00004-of-00004.bin",
210
+ "language_model.model.layers.29.mlp.up_proj.weight": "pytorch_model-00004-of-00004.bin",
211
+ "language_model.model.layers.29.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
212
+ "language_model.model.layers.29.self_attn.k_proj.weight": "pytorch_model-00004-of-00004.bin",
213
+ "language_model.model.layers.29.self_attn.o_proj.weight": "pytorch_model-00004-of-00004.bin",
214
+ "language_model.model.layers.29.self_attn.q_proj.weight": "pytorch_model-00004-of-00004.bin",
215
+ "language_model.model.layers.29.self_attn.v_proj.weight": "pytorch_model-00004-of-00004.bin",
216
+ "language_model.model.layers.3.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
217
+ "language_model.model.layers.3.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
218
+ "language_model.model.layers.3.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
219
+ "language_model.model.layers.3.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
220
+ "language_model.model.layers.3.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
221
+ "language_model.model.layers.3.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
222
+ "language_model.model.layers.3.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
223
+ "language_model.model.layers.3.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
224
+ "language_model.model.layers.3.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
225
+ "language_model.model.layers.30.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
226
+ "language_model.model.layers.30.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
227
+ "language_model.model.layers.30.mlp.gate_proj.weight": "pytorch_model-00004-of-00004.bin",
228
+ "language_model.model.layers.30.mlp.up_proj.weight": "pytorch_model-00004-of-00004.bin",
229
+ "language_model.model.layers.30.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
230
+ "language_model.model.layers.30.self_attn.k_proj.weight": "pytorch_model-00004-of-00004.bin",
231
+ "language_model.model.layers.30.self_attn.o_proj.weight": "pytorch_model-00004-of-00004.bin",
232
+ "language_model.model.layers.30.self_attn.q_proj.weight": "pytorch_model-00004-of-00004.bin",
233
+ "language_model.model.layers.30.self_attn.v_proj.weight": "pytorch_model-00004-of-00004.bin",
234
+ "language_model.model.layers.31.input_layernorm.weight": "pytorch_model-00004-of-00004.bin",
235
+ "language_model.model.layers.31.mlp.down_proj.weight": "pytorch_model-00004-of-00004.bin",
236
+ "language_model.model.layers.31.mlp.gate_proj.weight": "pytorch_model-00004-of-00004.bin",
237
+ "language_model.model.layers.31.mlp.up_proj.weight": "pytorch_model-00004-of-00004.bin",
238
+ "language_model.model.layers.31.post_attention_layernorm.weight": "pytorch_model-00004-of-00004.bin",
239
+ "language_model.model.layers.31.self_attn.k_proj.weight": "pytorch_model-00004-of-00004.bin",
240
+ "language_model.model.layers.31.self_attn.o_proj.weight": "pytorch_model-00004-of-00004.bin",
241
+ "language_model.model.layers.31.self_attn.q_proj.weight": "pytorch_model-00004-of-00004.bin",
242
+ "language_model.model.layers.31.self_attn.v_proj.weight": "pytorch_model-00004-of-00004.bin",
243
+ "language_model.model.layers.4.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
244
+ "language_model.model.layers.4.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
245
+ "language_model.model.layers.4.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
246
+ "language_model.model.layers.4.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
247
+ "language_model.model.layers.4.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
248
+ "language_model.model.layers.4.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
249
+ "language_model.model.layers.4.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
250
+ "language_model.model.layers.4.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
251
+ "language_model.model.layers.4.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
252
+ "language_model.model.layers.5.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
253
+ "language_model.model.layers.5.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
254
+ "language_model.model.layers.5.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
255
+ "language_model.model.layers.5.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
256
+ "language_model.model.layers.5.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
257
+ "language_model.model.layers.5.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
258
+ "language_model.model.layers.5.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
259
+ "language_model.model.layers.5.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
260
+ "language_model.model.layers.5.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
261
+ "language_model.model.layers.6.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
262
+ "language_model.model.layers.6.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
263
+ "language_model.model.layers.6.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
264
+ "language_model.model.layers.6.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
265
+ "language_model.model.layers.6.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
266
+ "language_model.model.layers.6.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
267
+ "language_model.model.layers.6.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
268
+ "language_model.model.layers.6.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
269
+ "language_model.model.layers.6.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
270
+ "language_model.model.layers.7.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
271
+ "language_model.model.layers.7.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
272
+ "language_model.model.layers.7.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
273
+ "language_model.model.layers.7.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
274
+ "language_model.model.layers.7.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
275
+ "language_model.model.layers.7.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
276
+ "language_model.model.layers.7.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
277
+ "language_model.model.layers.7.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
278
+ "language_model.model.layers.7.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
279
+ "language_model.model.layers.8.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
280
+ "language_model.model.layers.8.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
281
+ "language_model.model.layers.8.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
282
+ "language_model.model.layers.8.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
283
+ "language_model.model.layers.8.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
284
+ "language_model.model.layers.8.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
285
+ "language_model.model.layers.8.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
286
+ "language_model.model.layers.8.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
287
+ "language_model.model.layers.8.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
288
+ "language_model.model.layers.9.input_layernorm.weight": "pytorch_model-00002-of-00004.bin",
289
+ "language_model.model.layers.9.mlp.down_proj.weight": "pytorch_model-00002-of-00004.bin",
290
+ "language_model.model.layers.9.mlp.gate_proj.weight": "pytorch_model-00002-of-00004.bin",
291
+ "language_model.model.layers.9.mlp.up_proj.weight": "pytorch_model-00002-of-00004.bin",
292
+ "language_model.model.layers.9.post_attention_layernorm.weight": "pytorch_model-00002-of-00004.bin",
293
+ "language_model.model.layers.9.self_attn.k_proj.weight": "pytorch_model-00002-of-00004.bin",
294
+ "language_model.model.layers.9.self_attn.o_proj.weight": "pytorch_model-00002-of-00004.bin",
295
+ "language_model.model.layers.9.self_attn.q_proj.weight": "pytorch_model-00002-of-00004.bin",
296
+ "language_model.model.layers.9.self_attn.v_proj.weight": "pytorch_model-00002-of-00004.bin",
297
+ "language_model.model.norm.weight": "pytorch_model-00004-of-00004.bin",
298
+ "mm_projector.net.0.b1.conv1.bn.bias": "pytorch_model-00004-of-00004.bin",
299
+ "mm_projector.net.0.b1.conv1.bn.weight": "pytorch_model-00004-of-00004.bin",
300
+ "mm_projector.net.0.b1.conv1.conv.weight": "pytorch_model-00004-of-00004.bin",
301
+ "mm_projector.net.0.b1.conv2.bn.bias": "pytorch_model-00004-of-00004.bin",
302
+ "mm_projector.net.0.b1.conv2.bn.weight": "pytorch_model-00004-of-00004.bin",
303
+ "mm_projector.net.0.b1.conv2.conv.weight": "pytorch_model-00004-of-00004.bin",
304
+ "mm_projector.net.0.b1.conv3.bn.bias": "pytorch_model-00004-of-00004.bin",
305
+ "mm_projector.net.0.b1.conv3.bn.weight": "pytorch_model-00004-of-00004.bin",
306
+ "mm_projector.net.0.b1.conv3.conv.weight": "pytorch_model-00004-of-00004.bin",
307
+ "mm_projector.net.0.b1.se.fc1.bias": "pytorch_model-00004-of-00004.bin",
308
+ "mm_projector.net.0.b1.se.fc1.weight": "pytorch_model-00004-of-00004.bin",
309
+ "mm_projector.net.0.b1.se.fc2.bias": "pytorch_model-00004-of-00004.bin",
310
+ "mm_projector.net.0.b1.se.fc2.weight": "pytorch_model-00004-of-00004.bin",
311
+ "mm_projector.net.0.b2.conv1.bn.bias": "pytorch_model-00004-of-00004.bin",
312
+ "mm_projector.net.0.b2.conv1.bn.weight": "pytorch_model-00004-of-00004.bin",
313
+ "mm_projector.net.0.b2.conv1.conv.weight": "pytorch_model-00004-of-00004.bin",
314
+ "mm_projector.net.0.b2.conv2.bn.bias": "pytorch_model-00004-of-00004.bin",
315
+ "mm_projector.net.0.b2.conv2.bn.weight": "pytorch_model-00004-of-00004.bin",
316
+ "mm_projector.net.0.b2.conv2.conv.weight": "pytorch_model-00004-of-00004.bin",
317
+ "mm_projector.net.0.b2.conv3.bn.bias": "pytorch_model-00004-of-00004.bin",
318
+ "mm_projector.net.0.b2.conv3.bn.weight": "pytorch_model-00004-of-00004.bin",
319
+ "mm_projector.net.0.b2.conv3.conv.weight": "pytorch_model-00004-of-00004.bin",
320
+ "mm_projector.net.0.b2.se.fc1.bias": "pytorch_model-00004-of-00004.bin",
321
+ "mm_projector.net.0.b2.se.fc1.weight": "pytorch_model-00004-of-00004.bin",
322
+ "mm_projector.net.0.b2.se.fc2.bias": "pytorch_model-00004-of-00004.bin",
323
+ "mm_projector.net.0.b2.se.fc2.weight": "pytorch_model-00004-of-00004.bin",
324
+ "mm_projector.net.0.b3.conv1.bn.bias": "pytorch_model-00004-of-00004.bin",
325
+ "mm_projector.net.0.b3.conv1.bn.weight": "pytorch_model-00004-of-00004.bin",
326
+ "mm_projector.net.0.b3.conv1.conv.weight": "pytorch_model-00004-of-00004.bin",
327
+ "mm_projector.net.0.b3.conv2.bn.bias": "pytorch_model-00004-of-00004.bin",
328
+ "mm_projector.net.0.b3.conv2.bn.weight": "pytorch_model-00004-of-00004.bin",
329
+ "mm_projector.net.0.b3.conv2.conv.weight": "pytorch_model-00004-of-00004.bin",
330
+ "mm_projector.net.0.b3.conv3.bn.bias": "pytorch_model-00004-of-00004.bin",
331
+ "mm_projector.net.0.b3.conv3.bn.weight": "pytorch_model-00004-of-00004.bin",
332
+ "mm_projector.net.0.b3.conv3.conv.weight": "pytorch_model-00004-of-00004.bin",
333
+ "mm_projector.net.0.b3.se.fc1.bias": "pytorch_model-00004-of-00004.bin",
334
+ "mm_projector.net.0.b3.se.fc1.weight": "pytorch_model-00004-of-00004.bin",
335
+ "mm_projector.net.0.b3.se.fc2.bias": "pytorch_model-00004-of-00004.bin",
336
+ "mm_projector.net.0.b3.se.fc2.weight": "pytorch_model-00004-of-00004.bin",
337
+ "mm_projector.net.2.b1.conv1.bn.bias": "pytorch_model-00004-of-00004.bin",
338
+ "mm_projector.net.2.b1.conv1.bn.weight": "pytorch_model-00004-of-00004.bin",
339
+ "mm_projector.net.2.b1.conv1.conv.weight": "pytorch_model-00004-of-00004.bin",
340
+ "mm_projector.net.2.b1.conv2.bn.bias": "pytorch_model-00004-of-00004.bin",
341
+ "mm_projector.net.2.b1.conv2.bn.weight": "pytorch_model-00004-of-00004.bin",
342
+ "mm_projector.net.2.b1.conv2.conv.weight": "pytorch_model-00004-of-00004.bin",
343
+ "mm_projector.net.2.b1.conv3.bn.bias": "pytorch_model-00004-of-00004.bin",
344
+ "mm_projector.net.2.b1.conv3.bn.weight": "pytorch_model-00004-of-00004.bin",
345
+ "mm_projector.net.2.b1.conv3.conv.weight": "pytorch_model-00004-of-00004.bin",
346
+ "mm_projector.net.2.b1.se.fc1.bias": "pytorch_model-00004-of-00004.bin",
347
+ "mm_projector.net.2.b1.se.fc1.weight": "pytorch_model-00004-of-00004.bin",
348
+ "mm_projector.net.2.b1.se.fc2.bias": "pytorch_model-00004-of-00004.bin",
349
+ "mm_projector.net.2.b1.se.fc2.weight": "pytorch_model-00004-of-00004.bin",
350
+ "mm_projector.net.2.b2.conv1.bn.bias": "pytorch_model-00004-of-00004.bin",
351
+ "mm_projector.net.2.b2.conv1.bn.weight": "pytorch_model-00004-of-00004.bin",
352
+ "mm_projector.net.2.b2.conv1.conv.weight": "pytorch_model-00004-of-00004.bin",
353
+ "mm_projector.net.2.b2.conv2.bn.bias": "pytorch_model-00004-of-00004.bin",
354
+ "mm_projector.net.2.b2.conv2.bn.weight": "pytorch_model-00004-of-00004.bin",
355
+ "mm_projector.net.2.b2.conv2.conv.weight": "pytorch_model-00004-of-00004.bin",
356
+ "mm_projector.net.2.b2.conv3.bn.bias": "pytorch_model-00004-of-00004.bin",
357
+ "mm_projector.net.2.b2.conv3.bn.weight": "pytorch_model-00004-of-00004.bin",
358
+ "mm_projector.net.2.b2.conv3.conv.weight": "pytorch_model-00004-of-00004.bin",
359
+ "mm_projector.net.2.b2.se.fc1.bias": "pytorch_model-00004-of-00004.bin",
360
+ "mm_projector.net.2.b2.se.fc1.weight": "pytorch_model-00004-of-00004.bin",
361
+ "mm_projector.net.2.b2.se.fc2.bias": "pytorch_model-00004-of-00004.bin",
362
+ "mm_projector.net.2.b2.se.fc2.weight": "pytorch_model-00004-of-00004.bin",
363
+ "mm_projector.net.2.b3.conv1.bn.bias": "pytorch_model-00004-of-00004.bin",
364
+ "mm_projector.net.2.b3.conv1.bn.weight": "pytorch_model-00004-of-00004.bin",
365
+ "mm_projector.net.2.b3.conv1.conv.weight": "pytorch_model-00004-of-00004.bin",
366
+ "mm_projector.net.2.b3.conv2.bn.bias": "pytorch_model-00004-of-00004.bin",
367
+ "mm_projector.net.2.b3.conv2.bn.weight": "pytorch_model-00004-of-00004.bin",
368
+ "mm_projector.net.2.b3.conv2.conv.weight": "pytorch_model-00004-of-00004.bin",
369
+ "mm_projector.net.2.b3.conv3.bn.bias": "pytorch_model-00004-of-00004.bin",
370
+ "mm_projector.net.2.b3.conv3.bn.weight": "pytorch_model-00004-of-00004.bin",
371
+ "mm_projector.net.2.b3.conv3.conv.weight": "pytorch_model-00004-of-00004.bin",
372
+ "mm_projector.net.2.b3.se.fc1.bias": "pytorch_model-00004-of-00004.bin",
373
+ "mm_projector.net.2.b3.se.fc1.weight": "pytorch_model-00004-of-00004.bin",
374
+ "mm_projector.net.2.b3.se.fc2.bias": "pytorch_model-00004-of-00004.bin",
375
+ "mm_projector.net.2.b3.se.fc2.weight": "pytorch_model-00004-of-00004.bin",
376
+ "mm_projector.pos_emb": "pytorch_model-00004-of-00004.bin",
377
+ "mm_projector.readout.0.bias": "pytorch_model-00004-of-00004.bin",
378
+ "mm_projector.readout.0.weight": "pytorch_model-00004-of-00004.bin",
379
+ "mm_projector.readout.2.bias": "pytorch_model-00004-of-00004.bin",
380
+ "mm_projector.readout.2.weight": "pytorch_model-00004-of-00004.bin",
381
+ "vision_model.vision_model.embeddings.patch_embedding.bias": "pytorch_model-00001-of-00004.bin",
382
+ "vision_model.vision_model.embeddings.patch_embedding.weight": "pytorch_model-00001-of-00004.bin",
383
+ "vision_model.vision_model.embeddings.position_embedding.weight": "pytorch_model-00001-of-00004.bin",
384
+ "vision_model.vision_model.encoder.layers.0.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
385
+ "vision_model.vision_model.encoder.layers.0.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
386
+ "vision_model.vision_model.encoder.layers.0.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
387
+ "vision_model.vision_model.encoder.layers.0.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
388
+ "vision_model.vision_model.encoder.layers.0.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
389
+ "vision_model.vision_model.encoder.layers.0.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
390
+ "vision_model.vision_model.encoder.layers.0.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
391
+ "vision_model.vision_model.encoder.layers.0.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
392
+ "vision_model.vision_model.encoder.layers.0.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
393
+ "vision_model.vision_model.encoder.layers.0.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
394
+ "vision_model.vision_model.encoder.layers.0.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
395
+ "vision_model.vision_model.encoder.layers.0.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
396
+ "vision_model.vision_model.encoder.layers.0.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
397
+ "vision_model.vision_model.encoder.layers.0.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
398
+ "vision_model.vision_model.encoder.layers.0.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
399
+ "vision_model.vision_model.encoder.layers.0.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
400
+ "vision_model.vision_model.encoder.layers.1.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
401
+ "vision_model.vision_model.encoder.layers.1.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
402
+ "vision_model.vision_model.encoder.layers.1.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
403
+ "vision_model.vision_model.encoder.layers.1.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
404
+ "vision_model.vision_model.encoder.layers.1.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
405
+ "vision_model.vision_model.encoder.layers.1.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
406
+ "vision_model.vision_model.encoder.layers.1.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
407
+ "vision_model.vision_model.encoder.layers.1.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
408
+ "vision_model.vision_model.encoder.layers.1.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
409
+ "vision_model.vision_model.encoder.layers.1.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
410
+ "vision_model.vision_model.encoder.layers.1.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
411
+ "vision_model.vision_model.encoder.layers.1.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
412
+ "vision_model.vision_model.encoder.layers.1.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
413
+ "vision_model.vision_model.encoder.layers.1.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
414
+ "vision_model.vision_model.encoder.layers.1.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
415
+ "vision_model.vision_model.encoder.layers.1.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
416
+ "vision_model.vision_model.encoder.layers.10.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
417
+ "vision_model.vision_model.encoder.layers.10.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
418
+ "vision_model.vision_model.encoder.layers.10.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
419
+ "vision_model.vision_model.encoder.layers.10.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
420
+ "vision_model.vision_model.encoder.layers.10.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
421
+ "vision_model.vision_model.encoder.layers.10.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
422
+ "vision_model.vision_model.encoder.layers.10.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
423
+ "vision_model.vision_model.encoder.layers.10.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
424
+ "vision_model.vision_model.encoder.layers.10.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
425
+ "vision_model.vision_model.encoder.layers.10.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
426
+ "vision_model.vision_model.encoder.layers.10.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
427
+ "vision_model.vision_model.encoder.layers.10.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
428
+ "vision_model.vision_model.encoder.layers.10.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
429
+ "vision_model.vision_model.encoder.layers.10.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
430
+ "vision_model.vision_model.encoder.layers.10.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
431
+ "vision_model.vision_model.encoder.layers.10.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
432
+ "vision_model.vision_model.encoder.layers.11.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
433
+ "vision_model.vision_model.encoder.layers.11.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
434
+ "vision_model.vision_model.encoder.layers.11.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
435
+ "vision_model.vision_model.encoder.layers.11.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
436
+ "vision_model.vision_model.encoder.layers.11.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
437
+ "vision_model.vision_model.encoder.layers.11.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
438
+ "vision_model.vision_model.encoder.layers.11.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
439
+ "vision_model.vision_model.encoder.layers.11.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
440
+ "vision_model.vision_model.encoder.layers.11.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
441
+ "vision_model.vision_model.encoder.layers.11.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
442
+ "vision_model.vision_model.encoder.layers.11.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
443
+ "vision_model.vision_model.encoder.layers.11.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
444
+ "vision_model.vision_model.encoder.layers.11.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
445
+ "vision_model.vision_model.encoder.layers.11.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
446
+ "vision_model.vision_model.encoder.layers.11.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
447
+ "vision_model.vision_model.encoder.layers.11.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
448
+ "vision_model.vision_model.encoder.layers.12.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
449
+ "vision_model.vision_model.encoder.layers.12.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
450
+ "vision_model.vision_model.encoder.layers.12.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
451
+ "vision_model.vision_model.encoder.layers.12.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
452
+ "vision_model.vision_model.encoder.layers.12.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
453
+ "vision_model.vision_model.encoder.layers.12.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
454
+ "vision_model.vision_model.encoder.layers.12.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
455
+ "vision_model.vision_model.encoder.layers.12.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
456
+ "vision_model.vision_model.encoder.layers.12.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
457
+ "vision_model.vision_model.encoder.layers.12.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
458
+ "vision_model.vision_model.encoder.layers.12.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
459
+ "vision_model.vision_model.encoder.layers.12.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
460
+ "vision_model.vision_model.encoder.layers.12.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
461
+ "vision_model.vision_model.encoder.layers.12.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
462
+ "vision_model.vision_model.encoder.layers.12.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
463
+ "vision_model.vision_model.encoder.layers.12.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
464
+ "vision_model.vision_model.encoder.layers.13.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
465
+ "vision_model.vision_model.encoder.layers.13.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
466
+ "vision_model.vision_model.encoder.layers.13.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
467
+ "vision_model.vision_model.encoder.layers.13.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
468
+ "vision_model.vision_model.encoder.layers.13.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
469
+ "vision_model.vision_model.encoder.layers.13.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
470
+ "vision_model.vision_model.encoder.layers.13.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
471
+ "vision_model.vision_model.encoder.layers.13.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
472
+ "vision_model.vision_model.encoder.layers.13.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
473
+ "vision_model.vision_model.encoder.layers.13.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
474
+ "vision_model.vision_model.encoder.layers.13.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
475
+ "vision_model.vision_model.encoder.layers.13.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
476
+ "vision_model.vision_model.encoder.layers.13.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
477
+ "vision_model.vision_model.encoder.layers.13.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
478
+ "vision_model.vision_model.encoder.layers.13.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
479
+ "vision_model.vision_model.encoder.layers.13.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
480
+ "vision_model.vision_model.encoder.layers.14.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
481
+ "vision_model.vision_model.encoder.layers.14.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
482
+ "vision_model.vision_model.encoder.layers.14.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
483
+ "vision_model.vision_model.encoder.layers.14.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
484
+ "vision_model.vision_model.encoder.layers.14.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
485
+ "vision_model.vision_model.encoder.layers.14.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
486
+ "vision_model.vision_model.encoder.layers.14.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
487
+ "vision_model.vision_model.encoder.layers.14.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
488
+ "vision_model.vision_model.encoder.layers.14.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
489
+ "vision_model.vision_model.encoder.layers.14.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
490
+ "vision_model.vision_model.encoder.layers.14.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
491
+ "vision_model.vision_model.encoder.layers.14.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
492
+ "vision_model.vision_model.encoder.layers.14.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
493
+ "vision_model.vision_model.encoder.layers.14.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
494
+ "vision_model.vision_model.encoder.layers.14.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
495
+ "vision_model.vision_model.encoder.layers.14.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
496
+ "vision_model.vision_model.encoder.layers.15.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
497
+ "vision_model.vision_model.encoder.layers.15.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
498
+ "vision_model.vision_model.encoder.layers.15.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
499
+ "vision_model.vision_model.encoder.layers.15.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
500
+ "vision_model.vision_model.encoder.layers.15.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
501
+ "vision_model.vision_model.encoder.layers.15.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
502
+ "vision_model.vision_model.encoder.layers.15.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
503
+ "vision_model.vision_model.encoder.layers.15.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
504
+ "vision_model.vision_model.encoder.layers.15.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
505
+ "vision_model.vision_model.encoder.layers.15.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
506
+ "vision_model.vision_model.encoder.layers.15.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
507
+ "vision_model.vision_model.encoder.layers.15.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
508
+ "vision_model.vision_model.encoder.layers.15.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
509
+ "vision_model.vision_model.encoder.layers.15.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
510
+ "vision_model.vision_model.encoder.layers.15.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
511
+ "vision_model.vision_model.encoder.layers.15.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
512
+ "vision_model.vision_model.encoder.layers.16.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
513
+ "vision_model.vision_model.encoder.layers.16.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
514
+ "vision_model.vision_model.encoder.layers.16.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
515
+ "vision_model.vision_model.encoder.layers.16.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
516
+ "vision_model.vision_model.encoder.layers.16.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
517
+ "vision_model.vision_model.encoder.layers.16.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
518
+ "vision_model.vision_model.encoder.layers.16.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
519
+ "vision_model.vision_model.encoder.layers.16.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
520
+ "vision_model.vision_model.encoder.layers.16.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
521
+ "vision_model.vision_model.encoder.layers.16.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
522
+ "vision_model.vision_model.encoder.layers.16.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
523
+ "vision_model.vision_model.encoder.layers.16.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
524
+ "vision_model.vision_model.encoder.layers.16.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
525
+ "vision_model.vision_model.encoder.layers.16.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
526
+ "vision_model.vision_model.encoder.layers.16.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
527
+ "vision_model.vision_model.encoder.layers.16.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
528
+ "vision_model.vision_model.encoder.layers.17.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
529
+ "vision_model.vision_model.encoder.layers.17.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
530
+ "vision_model.vision_model.encoder.layers.17.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
531
+ "vision_model.vision_model.encoder.layers.17.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
532
+ "vision_model.vision_model.encoder.layers.17.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
533
+ "vision_model.vision_model.encoder.layers.17.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
534
+ "vision_model.vision_model.encoder.layers.17.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
535
+ "vision_model.vision_model.encoder.layers.17.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
536
+ "vision_model.vision_model.encoder.layers.17.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
537
+ "vision_model.vision_model.encoder.layers.17.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
538
+ "vision_model.vision_model.encoder.layers.17.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
539
+ "vision_model.vision_model.encoder.layers.17.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
540
+ "vision_model.vision_model.encoder.layers.17.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
541
+ "vision_model.vision_model.encoder.layers.17.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
542
+ "vision_model.vision_model.encoder.layers.17.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
543
+ "vision_model.vision_model.encoder.layers.17.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
544
+ "vision_model.vision_model.encoder.layers.18.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
545
+ "vision_model.vision_model.encoder.layers.18.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
546
+ "vision_model.vision_model.encoder.layers.18.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
547
+ "vision_model.vision_model.encoder.layers.18.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
548
+ "vision_model.vision_model.encoder.layers.18.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
549
+ "vision_model.vision_model.encoder.layers.18.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
550
+ "vision_model.vision_model.encoder.layers.18.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
551
+ "vision_model.vision_model.encoder.layers.18.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
552
+ "vision_model.vision_model.encoder.layers.18.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
553
+ "vision_model.vision_model.encoder.layers.18.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
554
+ "vision_model.vision_model.encoder.layers.18.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
555
+ "vision_model.vision_model.encoder.layers.18.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
556
+ "vision_model.vision_model.encoder.layers.18.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
557
+ "vision_model.vision_model.encoder.layers.18.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
558
+ "vision_model.vision_model.encoder.layers.18.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
559
+ "vision_model.vision_model.encoder.layers.18.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
560
+ "vision_model.vision_model.encoder.layers.19.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
561
+ "vision_model.vision_model.encoder.layers.19.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
562
+ "vision_model.vision_model.encoder.layers.19.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
563
+ "vision_model.vision_model.encoder.layers.19.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
564
+ "vision_model.vision_model.encoder.layers.19.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
565
+ "vision_model.vision_model.encoder.layers.19.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
566
+ "vision_model.vision_model.encoder.layers.19.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
567
+ "vision_model.vision_model.encoder.layers.19.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
568
+ "vision_model.vision_model.encoder.layers.19.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
569
+ "vision_model.vision_model.encoder.layers.19.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
570
+ "vision_model.vision_model.encoder.layers.19.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
571
+ "vision_model.vision_model.encoder.layers.19.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
572
+ "vision_model.vision_model.encoder.layers.19.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
573
+ "vision_model.vision_model.encoder.layers.19.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
574
+ "vision_model.vision_model.encoder.layers.19.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
575
+ "vision_model.vision_model.encoder.layers.19.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
576
+ "vision_model.vision_model.encoder.layers.2.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
577
+ "vision_model.vision_model.encoder.layers.2.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
578
+ "vision_model.vision_model.encoder.layers.2.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
579
+ "vision_model.vision_model.encoder.layers.2.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
580
+ "vision_model.vision_model.encoder.layers.2.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
581
+ "vision_model.vision_model.encoder.layers.2.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
582
+ "vision_model.vision_model.encoder.layers.2.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
583
+ "vision_model.vision_model.encoder.layers.2.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
584
+ "vision_model.vision_model.encoder.layers.2.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
585
+ "vision_model.vision_model.encoder.layers.2.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
586
+ "vision_model.vision_model.encoder.layers.2.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
587
+ "vision_model.vision_model.encoder.layers.2.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
588
+ "vision_model.vision_model.encoder.layers.2.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
589
+ "vision_model.vision_model.encoder.layers.2.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
590
+ "vision_model.vision_model.encoder.layers.2.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
591
+ "vision_model.vision_model.encoder.layers.2.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
592
+ "vision_model.vision_model.encoder.layers.20.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
593
+ "vision_model.vision_model.encoder.layers.20.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
594
+ "vision_model.vision_model.encoder.layers.20.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
595
+ "vision_model.vision_model.encoder.layers.20.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
596
+ "vision_model.vision_model.encoder.layers.20.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
597
+ "vision_model.vision_model.encoder.layers.20.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
598
+ "vision_model.vision_model.encoder.layers.20.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
599
+ "vision_model.vision_model.encoder.layers.20.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
600
+ "vision_model.vision_model.encoder.layers.20.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
601
+ "vision_model.vision_model.encoder.layers.20.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
602
+ "vision_model.vision_model.encoder.layers.20.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
603
+ "vision_model.vision_model.encoder.layers.20.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
604
+ "vision_model.vision_model.encoder.layers.20.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
605
+ "vision_model.vision_model.encoder.layers.20.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
606
+ "vision_model.vision_model.encoder.layers.20.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
607
+ "vision_model.vision_model.encoder.layers.20.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
608
+ "vision_model.vision_model.encoder.layers.21.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
609
+ "vision_model.vision_model.encoder.layers.21.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
610
+ "vision_model.vision_model.encoder.layers.21.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
611
+ "vision_model.vision_model.encoder.layers.21.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
612
+ "vision_model.vision_model.encoder.layers.21.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
613
+ "vision_model.vision_model.encoder.layers.21.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
614
+ "vision_model.vision_model.encoder.layers.21.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
615
+ "vision_model.vision_model.encoder.layers.21.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
616
+ "vision_model.vision_model.encoder.layers.21.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
617
+ "vision_model.vision_model.encoder.layers.21.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
618
+ "vision_model.vision_model.encoder.layers.21.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
619
+ "vision_model.vision_model.encoder.layers.21.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
620
+ "vision_model.vision_model.encoder.layers.21.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
621
+ "vision_model.vision_model.encoder.layers.21.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
622
+ "vision_model.vision_model.encoder.layers.21.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
623
+ "vision_model.vision_model.encoder.layers.21.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
624
+ "vision_model.vision_model.encoder.layers.22.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
625
+ "vision_model.vision_model.encoder.layers.22.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
626
+ "vision_model.vision_model.encoder.layers.22.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
627
+ "vision_model.vision_model.encoder.layers.22.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
628
+ "vision_model.vision_model.encoder.layers.22.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
629
+ "vision_model.vision_model.encoder.layers.22.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
630
+ "vision_model.vision_model.encoder.layers.22.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
631
+ "vision_model.vision_model.encoder.layers.22.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
632
+ "vision_model.vision_model.encoder.layers.22.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
633
+ "vision_model.vision_model.encoder.layers.22.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
634
+ "vision_model.vision_model.encoder.layers.22.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
635
+ "vision_model.vision_model.encoder.layers.22.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
636
+ "vision_model.vision_model.encoder.layers.22.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
637
+ "vision_model.vision_model.encoder.layers.22.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
638
+ "vision_model.vision_model.encoder.layers.22.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
639
+ "vision_model.vision_model.encoder.layers.22.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
640
+ "vision_model.vision_model.encoder.layers.23.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
641
+ "vision_model.vision_model.encoder.layers.23.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
642
+ "vision_model.vision_model.encoder.layers.23.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
643
+ "vision_model.vision_model.encoder.layers.23.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
644
+ "vision_model.vision_model.encoder.layers.23.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
645
+ "vision_model.vision_model.encoder.layers.23.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
646
+ "vision_model.vision_model.encoder.layers.23.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
647
+ "vision_model.vision_model.encoder.layers.23.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
648
+ "vision_model.vision_model.encoder.layers.23.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
649
+ "vision_model.vision_model.encoder.layers.23.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
650
+ "vision_model.vision_model.encoder.layers.23.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
651
+ "vision_model.vision_model.encoder.layers.23.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
652
+ "vision_model.vision_model.encoder.layers.23.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
653
+ "vision_model.vision_model.encoder.layers.23.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
654
+ "vision_model.vision_model.encoder.layers.23.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
655
+ "vision_model.vision_model.encoder.layers.23.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
656
+ "vision_model.vision_model.encoder.layers.24.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
657
+ "vision_model.vision_model.encoder.layers.24.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
658
+ "vision_model.vision_model.encoder.layers.24.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
659
+ "vision_model.vision_model.encoder.layers.24.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
660
+ "vision_model.vision_model.encoder.layers.24.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
661
+ "vision_model.vision_model.encoder.layers.24.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
662
+ "vision_model.vision_model.encoder.layers.24.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
663
+ "vision_model.vision_model.encoder.layers.24.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
664
+ "vision_model.vision_model.encoder.layers.24.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
665
+ "vision_model.vision_model.encoder.layers.24.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
666
+ "vision_model.vision_model.encoder.layers.24.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
667
+ "vision_model.vision_model.encoder.layers.24.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
668
+ "vision_model.vision_model.encoder.layers.24.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
669
+ "vision_model.vision_model.encoder.layers.24.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
670
+ "vision_model.vision_model.encoder.layers.24.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
671
+ "vision_model.vision_model.encoder.layers.24.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
672
+ "vision_model.vision_model.encoder.layers.25.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
673
+ "vision_model.vision_model.encoder.layers.25.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
674
+ "vision_model.vision_model.encoder.layers.25.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
675
+ "vision_model.vision_model.encoder.layers.25.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
676
+ "vision_model.vision_model.encoder.layers.25.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
677
+ "vision_model.vision_model.encoder.layers.25.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
678
+ "vision_model.vision_model.encoder.layers.25.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
679
+ "vision_model.vision_model.encoder.layers.25.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
680
+ "vision_model.vision_model.encoder.layers.25.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
681
+ "vision_model.vision_model.encoder.layers.25.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
682
+ "vision_model.vision_model.encoder.layers.25.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
683
+ "vision_model.vision_model.encoder.layers.25.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
684
+ "vision_model.vision_model.encoder.layers.25.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
685
+ "vision_model.vision_model.encoder.layers.25.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
686
+ "vision_model.vision_model.encoder.layers.25.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
687
+ "vision_model.vision_model.encoder.layers.25.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
688
+ "vision_model.vision_model.encoder.layers.26.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
689
+ "vision_model.vision_model.encoder.layers.26.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
690
+ "vision_model.vision_model.encoder.layers.26.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
691
+ "vision_model.vision_model.encoder.layers.26.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
692
+ "vision_model.vision_model.encoder.layers.26.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
693
+ "vision_model.vision_model.encoder.layers.26.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
694
+ "vision_model.vision_model.encoder.layers.26.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
695
+ "vision_model.vision_model.encoder.layers.26.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
696
+ "vision_model.vision_model.encoder.layers.26.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
697
+ "vision_model.vision_model.encoder.layers.26.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
698
+ "vision_model.vision_model.encoder.layers.26.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
699
+ "vision_model.vision_model.encoder.layers.26.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
700
+ "vision_model.vision_model.encoder.layers.26.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
701
+ "vision_model.vision_model.encoder.layers.26.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
702
+ "vision_model.vision_model.encoder.layers.26.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
703
+ "vision_model.vision_model.encoder.layers.26.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
704
+ "vision_model.vision_model.encoder.layers.3.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
705
+ "vision_model.vision_model.encoder.layers.3.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
706
+ "vision_model.vision_model.encoder.layers.3.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
707
+ "vision_model.vision_model.encoder.layers.3.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
708
+ "vision_model.vision_model.encoder.layers.3.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
709
+ "vision_model.vision_model.encoder.layers.3.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
710
+ "vision_model.vision_model.encoder.layers.3.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
711
+ "vision_model.vision_model.encoder.layers.3.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
712
+ "vision_model.vision_model.encoder.layers.3.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
713
+ "vision_model.vision_model.encoder.layers.3.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
714
+ "vision_model.vision_model.encoder.layers.3.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
715
+ "vision_model.vision_model.encoder.layers.3.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
716
+ "vision_model.vision_model.encoder.layers.3.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
717
+ "vision_model.vision_model.encoder.layers.3.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
718
+ "vision_model.vision_model.encoder.layers.3.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
719
+ "vision_model.vision_model.encoder.layers.3.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
720
+ "vision_model.vision_model.encoder.layers.4.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
721
+ "vision_model.vision_model.encoder.layers.4.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
722
+ "vision_model.vision_model.encoder.layers.4.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
723
+ "vision_model.vision_model.encoder.layers.4.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
724
+ "vision_model.vision_model.encoder.layers.4.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
725
+ "vision_model.vision_model.encoder.layers.4.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
726
+ "vision_model.vision_model.encoder.layers.4.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
727
+ "vision_model.vision_model.encoder.layers.4.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
728
+ "vision_model.vision_model.encoder.layers.4.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
729
+ "vision_model.vision_model.encoder.layers.4.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
730
+ "vision_model.vision_model.encoder.layers.4.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
731
+ "vision_model.vision_model.encoder.layers.4.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
732
+ "vision_model.vision_model.encoder.layers.4.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
733
+ "vision_model.vision_model.encoder.layers.4.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
734
+ "vision_model.vision_model.encoder.layers.4.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
735
+ "vision_model.vision_model.encoder.layers.4.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
736
+ "vision_model.vision_model.encoder.layers.5.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
737
+ "vision_model.vision_model.encoder.layers.5.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
738
+ "vision_model.vision_model.encoder.layers.5.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
739
+ "vision_model.vision_model.encoder.layers.5.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
740
+ "vision_model.vision_model.encoder.layers.5.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
741
+ "vision_model.vision_model.encoder.layers.5.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
742
+ "vision_model.vision_model.encoder.layers.5.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
743
+ "vision_model.vision_model.encoder.layers.5.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
744
+ "vision_model.vision_model.encoder.layers.5.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
745
+ "vision_model.vision_model.encoder.layers.5.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
746
+ "vision_model.vision_model.encoder.layers.5.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
747
+ "vision_model.vision_model.encoder.layers.5.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
748
+ "vision_model.vision_model.encoder.layers.5.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
749
+ "vision_model.vision_model.encoder.layers.5.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
750
+ "vision_model.vision_model.encoder.layers.5.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
751
+ "vision_model.vision_model.encoder.layers.5.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
752
+ "vision_model.vision_model.encoder.layers.6.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
753
+ "vision_model.vision_model.encoder.layers.6.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
754
+ "vision_model.vision_model.encoder.layers.6.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
755
+ "vision_model.vision_model.encoder.layers.6.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
756
+ "vision_model.vision_model.encoder.layers.6.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
757
+ "vision_model.vision_model.encoder.layers.6.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
758
+ "vision_model.vision_model.encoder.layers.6.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
759
+ "vision_model.vision_model.encoder.layers.6.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
760
+ "vision_model.vision_model.encoder.layers.6.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
761
+ "vision_model.vision_model.encoder.layers.6.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
762
+ "vision_model.vision_model.encoder.layers.6.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
763
+ "vision_model.vision_model.encoder.layers.6.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
764
+ "vision_model.vision_model.encoder.layers.6.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
765
+ "vision_model.vision_model.encoder.layers.6.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
766
+ "vision_model.vision_model.encoder.layers.6.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
767
+ "vision_model.vision_model.encoder.layers.6.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
768
+ "vision_model.vision_model.encoder.layers.7.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
769
+ "vision_model.vision_model.encoder.layers.7.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
770
+ "vision_model.vision_model.encoder.layers.7.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
771
+ "vision_model.vision_model.encoder.layers.7.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
772
+ "vision_model.vision_model.encoder.layers.7.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
773
+ "vision_model.vision_model.encoder.layers.7.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
774
+ "vision_model.vision_model.encoder.layers.7.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
775
+ "vision_model.vision_model.encoder.layers.7.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
776
+ "vision_model.vision_model.encoder.layers.7.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
777
+ "vision_model.vision_model.encoder.layers.7.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
778
+ "vision_model.vision_model.encoder.layers.7.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
779
+ "vision_model.vision_model.encoder.layers.7.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
780
+ "vision_model.vision_model.encoder.layers.7.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
781
+ "vision_model.vision_model.encoder.layers.7.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
782
+ "vision_model.vision_model.encoder.layers.7.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
783
+ "vision_model.vision_model.encoder.layers.7.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
784
+ "vision_model.vision_model.encoder.layers.8.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
785
+ "vision_model.vision_model.encoder.layers.8.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
786
+ "vision_model.vision_model.encoder.layers.8.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
787
+ "vision_model.vision_model.encoder.layers.8.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
788
+ "vision_model.vision_model.encoder.layers.8.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
789
+ "vision_model.vision_model.encoder.layers.8.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
790
+ "vision_model.vision_model.encoder.layers.8.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
791
+ "vision_model.vision_model.encoder.layers.8.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
792
+ "vision_model.vision_model.encoder.layers.8.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
793
+ "vision_model.vision_model.encoder.layers.8.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
794
+ "vision_model.vision_model.encoder.layers.8.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
795
+ "vision_model.vision_model.encoder.layers.8.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
796
+ "vision_model.vision_model.encoder.layers.8.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
797
+ "vision_model.vision_model.encoder.layers.8.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
798
+ "vision_model.vision_model.encoder.layers.8.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
799
+ "vision_model.vision_model.encoder.layers.8.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
800
+ "vision_model.vision_model.encoder.layers.9.layer_norm1.bias": "pytorch_model-00001-of-00004.bin",
801
+ "vision_model.vision_model.encoder.layers.9.layer_norm1.weight": "pytorch_model-00001-of-00004.bin",
802
+ "vision_model.vision_model.encoder.layers.9.layer_norm2.bias": "pytorch_model-00001-of-00004.bin",
803
+ "vision_model.vision_model.encoder.layers.9.layer_norm2.weight": "pytorch_model-00001-of-00004.bin",
804
+ "vision_model.vision_model.encoder.layers.9.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
805
+ "vision_model.vision_model.encoder.layers.9.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
806
+ "vision_model.vision_model.encoder.layers.9.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
807
+ "vision_model.vision_model.encoder.layers.9.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
808
+ "vision_model.vision_model.encoder.layers.9.self_attn.k_proj.bias": "pytorch_model-00001-of-00004.bin",
809
+ "vision_model.vision_model.encoder.layers.9.self_attn.k_proj.weight": "pytorch_model-00001-of-00004.bin",
810
+ "vision_model.vision_model.encoder.layers.9.self_attn.out_proj.bias": "pytorch_model-00001-of-00004.bin",
811
+ "vision_model.vision_model.encoder.layers.9.self_attn.out_proj.weight": "pytorch_model-00001-of-00004.bin",
812
+ "vision_model.vision_model.encoder.layers.9.self_attn.q_proj.bias": "pytorch_model-00001-of-00004.bin",
813
+ "vision_model.vision_model.encoder.layers.9.self_attn.q_proj.weight": "pytorch_model-00001-of-00004.bin",
814
+ "vision_model.vision_model.encoder.layers.9.self_attn.v_proj.bias": "pytorch_model-00001-of-00004.bin",
815
+ "vision_model.vision_model.encoder.layers.9.self_attn.v_proj.weight": "pytorch_model-00001-of-00004.bin",
816
+ "vision_model.vision_model.head.attention.in_proj_bias": "pytorch_model-00001-of-00004.bin",
817
+ "vision_model.vision_model.head.attention.in_proj_weight": "pytorch_model-00001-of-00004.bin",
818
+ "vision_model.vision_model.head.attention.out_proj.bias": "pytorch_model-00001-of-00004.bin",
819
+ "vision_model.vision_model.head.attention.out_proj.weight": "pytorch_model-00001-of-00004.bin",
820
+ "vision_model.vision_model.head.layernorm.bias": "pytorch_model-00001-of-00004.bin",
821
+ "vision_model.vision_model.head.layernorm.weight": "pytorch_model-00001-of-00004.bin",
822
+ "vision_model.vision_model.head.mlp.fc1.bias": "pytorch_model-00001-of-00004.bin",
823
+ "vision_model.vision_model.head.mlp.fc1.weight": "pytorch_model-00001-of-00004.bin",
824
+ "vision_model.vision_model.head.mlp.fc2.bias": "pytorch_model-00001-of-00004.bin",
825
+ "vision_model.vision_model.head.mlp.fc2.weight": "pytorch_model-00001-of-00004.bin",
826
+ "vision_model.vision_model.head.probe": "pytorch_model-00001-of-00004.bin",
827
+ "vision_model.vision_model.post_layernorm.bias": "pytorch_model-00001-of-00004.bin",
828
+ "vision_model.vision_model.post_layernorm.weight": "pytorch_model-00001-of-00004.bin"
829
+ }
830
+ }
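The weight_map above assigns every parameter of the vision tower (and, earlier in this index, the language model) to one of the four checkpoint shards. As a minimal sketch, not part of this commit and assuming a hypothetical local checkout of the repository, the index can be used to locate and load the shard that holds a given tensor:

# Minimal sketch: resolve a tensor through pytorch_model.bin.index.json.
# "./HyperCLOVAX-Seed-Vision-3B" is a hypothetical local path, not from this commit.
import json, os
import torch

ckpt_dir = "./HyperCLOVAX-Seed-Vision-3B"
with open(os.path.join(ckpt_dir, "pytorch_model.bin.index.json")) as f:
    index = json.load(f)

key = "vision_model.vision_model.post_layernorm.weight"
shard_file = index["weight_map"][key]  # e.g. "pytorch_model-00001-of-00004.bin"
state = torch.load(os.path.join(ckpt_dir, shard_file), map_location="cpu")
print(key, tuple(state[key].shape))

In normal use this resolution is handled automatically when transformers' from_pretrained is pointed at the repository; the sketch only makes the role of the index file explicit.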
special_tokens_map.json ADDED
@@ -0,0 +1,80 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|endoftext|>",
4
+ "<|fim_prefix|>",
5
+ "<|fim_middle|>",
6
+ "<|fim_suffix|>",
7
+ "<|endofprompt|>",
8
+ "<|_unuse_missing_100256|>",
9
+ "<|_unuse_missing_100261|>",
10
+ "<|_unuse_missing_100262|>",
11
+ "<|_unuse_missing_100263|>",
12
+ "<|_unuse_missing_100264|>",
13
+ "<|_unuse_missing_100265|>",
14
+ "<|_unuse_missing_100266|>",
15
+ "<|_unuse_missing_100267|>",
16
+ "<|_unuse_missing_100268|>",
17
+ "<|_unuse_missing_100269|>",
18
+ "<|_unuse_missing_100270|>",
19
+ "<|dummy3|>",
20
+ "<|im_start|>",
21
+ "<|im_end|>",
22
+ "<|stop|>",
23
+ "<|endofturn|>",
24
+ "<repo_name>",
25
+ "<file_sep>",
26
+ "<issue_start>",
27
+ "<issue_comment>",
28
+ "<issue_closed>",
29
+ "<jupyter_start>",
30
+ "<jupyter_text>",
31
+ "<jupyter_code>",
32
+ "<jupyter_output>",
33
+ "<jupyter_script>",
34
+ "<empty_output>",
35
+ "<code_to_intermediate>",
36
+ "<intermediate_to_code>",
37
+ "<pr>",
38
+ "<pr_status>",
39
+ "<pr_is_merged>",
40
+ "<pr_base>",
41
+ "<pr_file>",
42
+ "<pr_base_code>",
43
+ "<pr_diff>",
44
+ "<pr_diff_hunk>",
45
+ "<pr_comment>",
46
+ "<pr_event_id>",
47
+ "<pr_review>",
48
+ "<pr_review_state>",
49
+ "<pr_review_comment>",
50
+ "<pr_in_reply_to_review_id>",
51
+ "<pr_in_reply_to_comment_id>",
52
+ "<pr_diff_hunk_comment_line>",
53
+ "<NAME>",
54
+ "<EMAIL>",
55
+ "<KEY>",
56
+ "<PASSWORD>"
57
+ ],
58
+ "bos_token": {
59
+ "content": "<|endoftext|>",
60
+ "lstrip": false,
61
+ "normalized": false,
62
+ "rstrip": false,
63
+ "single_word": false
64
+ },
65
+ "eos_token": "<|endofturn|>",
66
+ "pad_token": {
67
+ "content": "<|endoftext|>",
68
+ "lstrip": false,
69
+ "normalized": false,
70
+ "rstrip": false,
71
+ "single_word": false
72
+ },
73
+ "unk_token": {
74
+ "content": "<|endoftext|>",
75
+ "lstrip": false,
76
+ "normalized": false,
77
+ "rstrip": false,
78
+ "single_word": false
79
+ }
80
+ }
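special_tokens_map.json above declares <|endoftext|> as the bos/pad/unk token and <|endofturn|> as the eos token, alongside the FIM, chat, and code/PR special tokens. A minimal sketch, assuming a hypothetical local path to this repository, to confirm the tokenizer picks these up:

# Minimal sketch (hypothetical local path): check the declared special tokens.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./HyperCLOVAX-Seed-Vision-3B")
print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)
# expected: <|endoftext|> <|endofturn|> <|endoftext|> <|endoftext|>
print(tok.convert_tokens_to_ids("<|im_start|>"), tok.convert_tokens_to_ids("<|endofturn|>"))
# expected: 100272 100275, per the added_tokens_decoder in tokenizer_config.json below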
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,507 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "added_tokens_decoder": {
4
+ "100256": {
5
+ "content": "<|_unuse_missing_100256|>",
6
+ "lstrip": false,
7
+ "normalized": false,
8
+ "rstrip": false,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "100257": {
13
+ "content": "<|endoftext|>",
14
+ "lstrip": false,
15
+ "normalized": false,
16
+ "rstrip": false,
17
+ "single_word": false,
18
+ "special": true
19
+ },
20
+ "100258": {
21
+ "content": "<|fim_prefix|>",
22
+ "lstrip": false,
23
+ "normalized": false,
24
+ "rstrip": false,
25
+ "single_word": false,
26
+ "special": true
27
+ },
28
+ "100259": {
29
+ "content": "<|fim_middle|>",
30
+ "lstrip": false,
31
+ "normalized": false,
32
+ "rstrip": false,
33
+ "single_word": false,
34
+ "special": true
35
+ },
36
+ "100260": {
37
+ "content": "<|fim_suffix|>",
38
+ "lstrip": false,
39
+ "normalized": false,
40
+ "rstrip": false,
41
+ "single_word": false,
42
+ "special": true
43
+ },
44
+ "100261": {
45
+ "content": "<|_unuse_missing_100261|>",
46
+ "lstrip": false,
47
+ "normalized": false,
48
+ "rstrip": false,
49
+ "single_word": false,
50
+ "special": true
51
+ },
52
+ "100262": {
53
+ "content": "<|_unuse_missing_100262|>",
54
+ "lstrip": false,
55
+ "normalized": false,
56
+ "rstrip": false,
57
+ "single_word": false,
58
+ "special": true
59
+ },
60
+ "100263": {
61
+ "content": "<|_unuse_missing_100263|>",
62
+ "lstrip": false,
63
+ "normalized": false,
64
+ "rstrip": false,
65
+ "single_word": false,
66
+ "special": true
67
+ },
68
+ "100264": {
69
+ "content": "<|_unuse_missing_100264|>",
70
+ "lstrip": false,
71
+ "normalized": false,
72
+ "rstrip": false,
73
+ "single_word": false,
74
+ "special": true
75
+ },
76
+ "100265": {
77
+ "content": "<|_unuse_missing_100265|>",
78
+ "lstrip": false,
79
+ "normalized": false,
80
+ "rstrip": false,
81
+ "single_word": false,
82
+ "special": true
83
+ },
84
+ "100266": {
85
+ "content": "<|_unuse_missing_100266|>",
86
+ "lstrip": false,
87
+ "normalized": false,
88
+ "rstrip": false,
89
+ "single_word": false,
90
+ "special": true
91
+ },
92
+ "100267": {
93
+ "content": "<|_unuse_missing_100267|>",
94
+ "lstrip": false,
95
+ "normalized": false,
96
+ "rstrip": false,
97
+ "single_word": false,
98
+ "special": true
99
+ },
100
+ "100268": {
101
+ "content": "<|_unuse_missing_100268|>",
102
+ "lstrip": false,
103
+ "normalized": false,
104
+ "rstrip": false,
105
+ "single_word": false,
106
+ "special": true
107
+ },
108
+ "100269": {
109
+ "content": "<|_unuse_missing_100269|>",
110
+ "lstrip": false,
111
+ "normalized": false,
112
+ "rstrip": false,
113
+ "single_word": false,
114
+ "special": true
115
+ },
116
+ "100270": {
117
+ "content": "<|_unuse_missing_100270|>",
118
+ "lstrip": false,
119
+ "normalized": false,
120
+ "rstrip": false,
121
+ "single_word": false,
122
+ "special": true
123
+ },
124
+ "100271": {
125
+ "content": "<|dummy3|>",
126
+ "lstrip": false,
127
+ "normalized": false,
128
+ "rstrip": false,
129
+ "single_word": false,
130
+ "special": true
131
+ },
132
+ "100272": {
133
+ "content": "<|im_start|>",
134
+ "lstrip": false,
135
+ "normalized": false,
136
+ "rstrip": false,
137
+ "single_word": false,
138
+ "special": true
139
+ },
140
+ "100273": {
141
+ "content": "<|im_end|>",
142
+ "lstrip": false,
143
+ "normalized": false,
144
+ "rstrip": false,
145
+ "single_word": false,
146
+ "special": true
147
+ },
148
+ "100274": {
149
+ "content": "<|stop|>",
150
+ "lstrip": false,
151
+ "normalized": false,
152
+ "rstrip": false,
153
+ "single_word": false,
154
+ "special": true
155
+ },
156
+ "100275": {
157
+ "content": "<|endofturn|>",
158
+ "lstrip": false,
159
+ "normalized": false,
160
+ "rstrip": false,
161
+ "single_word": false,
162
+ "special": true
163
+ },
164
+ "100276": {
165
+ "content": "<|endofprompt|>",
166
+ "lstrip": false,
167
+ "normalized": false,
168
+ "rstrip": false,
169
+ "single_word": false,
170
+ "special": true
171
+ },
172
+ "110491": {
173
+ "content": "<repo_name>",
174
+ "lstrip": false,
175
+ "normalized": false,
176
+ "rstrip": false,
177
+ "single_word": false,
178
+ "special": true
179
+ },
180
+ "110492": {
181
+ "content": "<file_sep>",
182
+ "lstrip": false,
183
+ "normalized": false,
184
+ "rstrip": false,
185
+ "single_word": false,
186
+ "special": true
187
+ },
188
+ "110493": {
189
+ "content": "<issue_start>",
190
+ "lstrip": false,
191
+ "normalized": false,
192
+ "rstrip": false,
193
+ "single_word": false,
194
+ "special": true
195
+ },
196
+ "110494": {
197
+ "content": "<issue_comment>",
198
+ "lstrip": false,
199
+ "normalized": false,
200
+ "rstrip": false,
201
+ "single_word": false,
202
+ "special": true
203
+ },
204
+ "110495": {
205
+ "content": "<issue_closed>",
206
+ "lstrip": false,
207
+ "normalized": false,
208
+ "rstrip": false,
209
+ "single_word": false,
210
+ "special": true
211
+ },
212
+ "110496": {
213
+ "content": "<jupyter_start>",
214
+ "lstrip": false,
215
+ "normalized": false,
216
+ "rstrip": false,
217
+ "single_word": false,
218
+ "special": true
219
+ },
220
+ "110497": {
221
+ "content": "<jupyter_text>",
222
+ "lstrip": false,
223
+ "normalized": false,
224
+ "rstrip": false,
225
+ "single_word": false,
226
+ "special": true
227
+ },
228
+ "110498": {
229
+ "content": "<jupyter_code>",
230
+ "lstrip": false,
231
+ "normalized": false,
232
+ "rstrip": false,
233
+ "single_word": false,
234
+ "special": true
235
+ },
236
+ "110499": {
237
+ "content": "<jupyter_output>",
238
+ "lstrip": false,
239
+ "normalized": false,
240
+ "rstrip": false,
241
+ "single_word": false,
242
+ "special": true
243
+ },
244
+ "110500": {
245
+ "content": "<jupyter_script>",
246
+ "lstrip": false,
247
+ "normalized": false,
248
+ "rstrip": false,
249
+ "single_word": false,
250
+ "special": true
251
+ },
252
+ "110501": {
253
+ "content": "<empty_output>",
254
+ "lstrip": false,
255
+ "normalized": false,
256
+ "rstrip": false,
257
+ "single_word": false,
258
+ "special": true
259
+ },
260
+ "110502": {
261
+ "content": "<code_to_intermediate>",
262
+ "lstrip": false,
263
+ "normalized": false,
264
+ "rstrip": false,
265
+ "single_word": false,
266
+ "special": true
267
+ },
268
+ "110503": {
269
+ "content": "<intermediate_to_code>",
270
+ "lstrip": false,
271
+ "normalized": false,
272
+ "rstrip": false,
273
+ "single_word": false,
274
+ "special": true
275
+ },
276
+ "110504": {
277
+ "content": "<pr>",
278
+ "lstrip": false,
279
+ "normalized": false,
280
+ "rstrip": false,
281
+ "single_word": false,
282
+ "special": true
283
+ },
284
+ "110505": {
285
+ "content": "<pr_status>",
286
+ "lstrip": false,
287
+ "normalized": false,
288
+ "rstrip": false,
289
+ "single_word": false,
290
+ "special": true
291
+ },
292
+ "110506": {
293
+ "content": "<pr_is_merged>",
294
+ "lstrip": false,
295
+ "normalized": false,
296
+ "rstrip": false,
297
+ "single_word": false,
298
+ "special": true
299
+ },
300
+ "110507": {
301
+ "content": "<pr_base>",
302
+ "lstrip": false,
303
+ "normalized": false,
304
+ "rstrip": false,
305
+ "single_word": false,
306
+ "special": true
307
+ },
308
+ "110508": {
309
+ "content": "<pr_file>",
310
+ "lstrip": false,
311
+ "normalized": false,
312
+ "rstrip": false,
313
+ "single_word": false,
314
+ "special": true
315
+ },
316
+ "110509": {
317
+ "content": "<pr_base_code>",
318
+ "lstrip": false,
319
+ "normalized": false,
320
+ "rstrip": false,
321
+ "single_word": false,
322
+ "special": true
323
+ },
324
+ "110510": {
325
+ "content": "<pr_diff>",
326
+ "lstrip": false,
327
+ "normalized": false,
328
+ "rstrip": false,
329
+ "single_word": false,
330
+ "special": true
331
+ },
332
+ "110511": {
333
+ "content": "<pr_diff_hunk>",
334
+ "lstrip": false,
335
+ "normalized": false,
336
+ "rstrip": false,
337
+ "single_word": false,
338
+ "special": true
339
+ },
340
+ "110512": {
341
+ "content": "<pr_comment>",
342
+ "lstrip": false,
343
+ "normalized": false,
344
+ "rstrip": false,
345
+ "single_word": false,
346
+ "special": true
347
+ },
348
+ "110513": {
349
+ "content": "<pr_event_id>",
350
+ "lstrip": false,
351
+ "normalized": false,
352
+ "rstrip": false,
353
+ "single_word": false,
354
+ "special": true
355
+ },
356
+ "110514": {
357
+ "content": "<pr_review>",
358
+ "lstrip": false,
359
+ "normalized": false,
360
+ "rstrip": false,
361
+ "single_word": false,
362
+ "special": true
363
+ },
364
+ "110515": {
365
+ "content": "<pr_review_state>",
366
+ "lstrip": false,
367
+ "normalized": false,
368
+ "rstrip": false,
369
+ "single_word": false,
370
+ "special": true
371
+ },
372
+ "110516": {
373
+ "content": "<pr_review_comment>",
374
+ "lstrip": false,
375
+ "normalized": false,
376
+ "rstrip": false,
377
+ "single_word": false,
378
+ "special": true
379
+ },
380
+ "110517": {
381
+ "content": "<pr_in_reply_to_review_id>",
382
+ "lstrip": false,
383
+ "normalized": false,
384
+ "rstrip": false,
385
+ "single_word": false,
386
+ "special": true
387
+ },
388
+ "110518": {
389
+ "content": "<pr_in_reply_to_comment_id>",
390
+ "lstrip": false,
391
+ "normalized": false,
392
+ "rstrip": false,
393
+ "single_word": false,
394
+ "special": true
395
+ },
396
+ "110519": {
397
+ "content": "<pr_diff_hunk_comment_line>",
398
+ "lstrip": false,
399
+ "normalized": false,
400
+ "rstrip": false,
401
+ "single_word": false,
402
+ "special": true
403
+ },
404
+ "110520": {
405
+ "content": "<NAME>",
406
+ "lstrip": false,
407
+ "normalized": false,
408
+ "rstrip": false,
409
+ "single_word": false,
410
+ "special": true
411
+ },
412
+ "110521": {
413
+ "content": "<EMAIL>",
414
+ "lstrip": false,
415
+ "normalized": false,
416
+ "rstrip": false,
417
+ "single_word": false,
418
+ "special": true
419
+ },
420
+ "110522": {
421
+ "content": "<KEY>",
422
+ "lstrip": false,
423
+ "normalized": false,
424
+ "rstrip": false,
425
+ "single_word": false,
426
+ "special": true
427
+ },
428
+ "110523": {
429
+ "content": "<PASSWORD>",
430
+ "lstrip": false,
431
+ "normalized": false,
432
+ "rstrip": false,
433
+ "single_word": false,
434
+ "special": true
435
+ }
436
+ },
437
+ "additional_special_tokens": [
438
+ "<|endoftext|>",
439
+ "<|fim_prefix|>",
440
+ "<|fim_middle|>",
441
+ "<|fim_suffix|>",
442
+ "<|endofprompt|>",
443
+ "<|_unuse_missing_100256|>",
444
+ "<|_unuse_missing_100261|>",
445
+ "<|_unuse_missing_100262|>",
446
+ "<|_unuse_missing_100263|>",
447
+ "<|_unuse_missing_100264|>",
448
+ "<|_unuse_missing_100265|>",
449
+ "<|_unuse_missing_100266|>",
450
+ "<|_unuse_missing_100267|>",
451
+ "<|_unuse_missing_100268|>",
452
+ "<|_unuse_missing_100269|>",
453
+ "<|_unuse_missing_100270|>",
454
+ "<|dummy3|>",
455
+ "<|im_start|>",
456
+ "<|im_end|>",
457
+ "<|stop|>",
458
+ "<|endofturn|>",
459
+ "<repo_name>",
460
+ "<file_sep>",
461
+ "<issue_start>",
462
+ "<issue_comment>",
463
+ "<issue_closed>",
464
+ "<jupyter_start>",
465
+ "<jupyter_text>",
466
+ "<jupyter_code>",
467
+ "<jupyter_output>",
468
+ "<jupyter_script>",
469
+ "<empty_output>",
470
+ "<code_to_intermediate>",
471
+ "<intermediate_to_code>",
472
+ "<pr>",
473
+ "<pr_status>",
474
+ "<pr_is_merged>",
475
+ "<pr_base>",
476
+ "<pr_file>",
477
+ "<pr_base_code>",
478
+ "<pr_diff>",
479
+ "<pr_diff_hunk>",
480
+ "<pr_comment>",
481
+ "<pr_event_id>",
482
+ "<pr_review>",
483
+ "<pr_review_state>",
484
+ "<pr_review_comment>",
485
+ "<pr_in_reply_to_review_id>",
486
+ "<pr_in_reply_to_comment_id>",
487
+ "<pr_diff_hunk_comment_line>",
488
+ "<NAME>",
489
+ "<EMAIL>",
490
+ "<KEY>",
491
+ "<PASSWORD>"
492
+ ],
493
+ "bos_token": "<|endoftext|>",
494
+ "chat_template": [
495
+ {
496
+ "name": "default",
497
+ "template": "<|im_start|>tool_list\n<|im_end|>\n{% for message in messages %}\n{% set content = message['content'] %}\n{% set role = message['role'] %}\n{% if loop.first and role != 'system' %}\n<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n{% endif %}\n{% if message['content'] is string %}\n<|im_start|>{{ role }}\n{{ message['content'] }}<|im_end|>\n{% else %}\n{% if content['type'] == 'image' %}\n<|im_start|>{{ role }} (mime)\n{\"type\": \"image/jpeg\", \"filename\": \"{{ content['filename'] }}\"}<|im_end|>\n<|im_start|>{{ role }} (vector)\n<|dummy3|><|im_end|>\n<|im_start|>image/aux\n다음 중 ocr은 사진에서 검출된 글자이고, lens_keyword는 사진에서 추출된 keyword와 bbox 위치입니다. bbox는 0~1 사이로 정규화된 [x1, y1, x2, y2]의 형태입니다. 참고하여 답변하세요. {\"ocr\": \"{{ content['ocr'] or '' }}\", \"lens_keywords\": \"{{ content['lens_keywords'] or '' }}\", \"lens_local_keywords\": \"{{ content['lens_local_keywords'] or '' }}\"}<|im_end|>\n{% elif content['type'] == 'video' %}\n<|im_start|>{{ role }} (mime)\n{\"type\": \"video/mp4\", \"filename\": \"{{ content['filename'] }}\"}<|im_end|>\n<|im_start|>{{ role }} (vector)\n<|dummy3|><|im_end|>\n<|im_start|>image/aux\n{% if content.get('is_final_grid') %}\n다음 중 lens_keyword는 사진에서 추출된 keyword와 bbox 위치입니다. bbox는 0~1 사이로 정규화된 [x1, y1, x2, y2]의 형태입니다. video_time_stamp는 비디오에서 해당 구간의 시간 정보입니다. speech_to_text는 비디오 속에서의 대화, 음성, 소리, 대사, 그리고 말을 전부 글로 받아 적은 것 입니다. 참고하여 답변하세요. {\"video_time_stamp\": \"{{ content['video_time_stamp'] }}\", \"lens_keywords\": \"{{ content.get('lens_keywords', '') }}\", \"lens_local_keywords\": \"{{ content.get('lens_local_keywords', '') }}\", \"speech_to_text\": \"{{ content.get('speech_to_text', '') }}\"}\n{% else %}\n다음 중 video_time_stamp는 비디오에서 해당 구간의 시간 정보입니다. 참고하여 답변하세요. {\"video_time_stamp\": \"{{ content['video_time_stamp'] }}\"}\n{% endif %}<|im_end|>\n{% elif content['type'] == 'text' %}\n<|im_start|>{{ role }}\n{{ content['text'] }}<|im_end|>\n{% endif %}\n{% endif %}\n{% endfor %}\n{% if add_generation_prompt %}\n<|im_start|>assistant\n{% endif %}\n"
498
+ }
499
+ ],
500
+ "clean_up_tokenization_spaces": true,
501
+ "eos_token": "<|endofturn|>",
502
+ "extra_special_tokens": {},
503
+ "model_max_length": 1000000000000000019884624838656,
504
+ "pad_token": "<|endoftext|>",
505
+ "tokenizer_class": "GPT2Tokenizer",
506
+ "unk_token": "<|endoftext|>"
507
+ }
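tokenizer_config.json registers the added special tokens with their ids and a named default chat template. The template opens every conversation with an <|im_start|>tool_list block, injects a default "You are a helpful assistant." system turn when none is given, and wraps each turn in <|im_start|>/<|im_end|>. For image or video content it additionally emits (mime), (vector), and image/aux blocks; the auxiliary instruction in those blocks is written in Korean and tells the model that "ocr" is text detected in the image, "lens_keywords"/"lens_local_keywords" are extracted keywords with bounding boxes normalized to [x1, y1, x2, y2] in [0, 1], and, for video, that "video_time_stamp" gives the segment's time range and "speech_to_text" is a transcript of the audio. A minimal sketch, assuming a hypothetical local path, of rendering a text-only conversation with this template:

# Minimal sketch (hypothetical local path): apply the default chat template.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./HyperCLOVAX-Seed-Vision-3B")
messages = [
    {"role": "user", "content": "Summarize the HyperCLOVA X SEED license in one sentence."}
]
prompt = tok.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)
# The rendered string starts with the <|im_start|>tool_list block, adds the default system
# turn, wraps the user turn in <|im_start|>/<|im_end|>, and ends with "<|im_start|>assistant"
# because add_generation_prompt=True.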