Commit bf24a0f · verified · 0 parent(s) · committed by vxo hyperclovax

Duplicate from naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B


Co-authored-by: HyperCLOVA X (admin) <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,62 @@
+ HyperCLOVA X SEED Model License Agreement
+
+ Model Release Date: April 24, 2025
+
+ This HyperCLOVA X SEED Model License Agreement (the “Agreement”) is a legal agreement between you and NAVER Corporation and NAVER Cloud Corporation (“NAVER”) and governs your use of the Models that NAVER provides to You under this Agreement.
+
+ NAVER Corp., as the holder of the intellectual property of the Model, and its affiliate, NAVER Cloud Corp., as the exclusive business operator of HyperCLOVA X, enter into this Agreement with you. NAVER and you are each a “party” and collectively the “parties.”
+
+ By using, reproducing, modifying, distributing, performing or displaying any portion or element of the Model or Derivative Model, or otherwise accepting the terms of this Agreement, you agree to be bound by this Agreement. You represent to us that you are lawfully able to enter into contracts, and if you are entering into this Agreement for an entity, that you have legal authority to bind that entity.
+
+ 1. Definitions.
+
+ 1.1. “Affiliate” means any entity directly or indirectly controlling, controlled by or under common control with either party, where “control” means the possession, directly or indirectly, of the power to independently direct or cause the direction of the management and policies of an entity, whether through ownership of more than fifty percent (50%) of the stock or other equity interests entitled to vote for representation on its board of directors, or body performing similar functions, by contract or otherwise.
+
+ 1.2. “Derivative Model” means all (i) modifications to the Model, (ii) works based on the Model, or (iii) any other machine learning model which is created by transfer of patterns of the weights, parameters, operations, or Output of the Model, to that model in order to cause that model to perform similarly to the Model, including distillation methods that use intermediate data representations or methods based on the generation of synthetic data Outputs by the Model for training that Model. For clarity, Outputs are not deemed Derivative Models.
+
+ 1.3. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
+
+ 1.4. “Model” means the foundational large language models and software and algorithms, including machine-learning model code and trained model weights distributed by NAVER.
+
+ 1.5. “Output” means the information content output of the Model or a Derivative Model that results from operating or otherwise using the Model or Derivative Models.
+
+ 2. Conditions for Use, License Grant and Restrictions
+
+ 2.1. Conditions for Use. The Model and any Derivative Model are subject to the terms of this Agreement, which governs your use. If You institute copyright or patent litigation against any entity (including a crossclaim or counterclaim in a lawsuit) alleging that the Model or a Derivative Model constitutes direct or contributory copyright or patent infringement, then any license granted to you under this Agreement for that Model or Derivative Model will terminate as of the date such litigation is filed. NAVER may update this Agreement to comply with legal and regulatory requirements at any time, and You agree to either comply with any updated license or cease your copying, use, and distribution of the Model and any Derivative Model.
+
+ 2.2. License Grant. Subject to the terms and conditions of this Agreement, NAVER hereby grants to you a non-exclusive, worldwide, non-transferable, revocable and royalty-free limited license under NAVER’s intellectual property or other rights owned by NAVER embodied in the Model to access, download, install, copy, use, reproduce, distribute, create derivative works of, and make modifications to the Model.
+
+ 2.3. Prohibited Use Policy. NAVER is committed to safety, trust and transparency in AI development. NAVER encourages You to (i) ensure that the product or service you develop, use, offer as a service or distribute meets the legal and ethical requirements of the relevant industry or use case, (ii) take reasonable measures to address unintended bias and to mitigate harm to others, including underrepresented or vulnerable groups, and (iii) inform users of the nature and limitations of the product or service. NAVER expressly prohibits the use of its products or services for any purpose in violation of applicable law and regulation, including but not limited to (a) illegal surveillance, (b) illegal collection or processing of biometric information without the consent of the subject where required under applicable law, or (c) illegal harassment, abuse, threatening or bullying of individuals or groups of individuals or intentionally misleading or deceiving others.
+
+ 3. Redistribution.
+
+ 3.1. You may reproduce, distribute or make available the Model or Derivative Models thereof, or a product or service (including another AI model) that contains any of them, if you meet all of the following conditions: you must (i) include the Prohibited Use Policy referenced in Section 2.3. as an enforceable provision in any agreement (e.g., license agreement, terms of use, etc.) governing the use and/or distribution of the Model or Derivative Model, and you must provide notice to subsequent users you distribute to that the Model or Derivative Models are subject to the use restrictions in Section 2.3., (ii) provide all third party recipients of the Model or Derivative Models a copy of this Agreement, (iii) cause any modified files to carry prominent notices stating that you modified the files; (iv) include the following attribution notice within a “Notice” text file distributed as part of such copies: “HyperCLOVA X SEED Model is licensed under the HyperCLOVA X SEED Model License Agreement, Copyright © NAVER Corp. All Rights Reserved.”, and (v) prominently display “Powered by HyperCLOVA X” on a related website, user interface, blogpost, about page, or product documentation. If you use the Model or any Outputs of the Model to create, train, fine-tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “HyperCLOVA X” at the beginning of any such AI model name.
+ 3.2. You may add your own copyright statement to your modifications and, except as set forth in this Section, may provide additional or different license terms and conditions for use, reproduction, or distribution of your modifications, or for any such Derivative Models as a whole, provided your use, reproduction, and distribution of the Model or Derivative Models otherwise comply with the terms and conditions stated in this Agreement. Any additional or different terms and conditions you impose must not conflict with the terms of this Agreement.
+
+ 4. Additional Commercial Terms. If (i) as of the Model Release Date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s Affiliates, is greater than 10 million monthly active users in the preceding calendar month, or (ii) the Licensee or its Affiliate distributes or makes available any product or service, which is substantially similar to or directly competes with any product and service provided by NAVER, then the Licensee must request a license from NAVER. Such license may be granted by NAVER at its sole discretion, and the Licensee is not authorized to exercise any rights under this Agreement unless and until NAVER expressly grants you such rights.
+
+ 5. Generated Output. NAVER claims no rights in Outputs you generate using the Model. You and your use are solely responsible for Outputs and their subsequent uses.
+
+ 6. DISCLAIMER OF WARRANTY. UNLESS REQUIRED BY APPLICABLE LAW, THE MODEL AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND NAVER DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE MODEL, DERIVATIVE MODELS, OUTPUTS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE MODEL AND ANY OUTPUTS AND RESULTS AND YOUR EXERCISE OF PERMISSION UNDER THIS AGREEMENT.
+
+ 7. LIMITATION OF LIABILITY. IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, UNLESS REQUIRED BY APPLICABLE LAW (SUCH AS IN CASES OF DELIBERATE AND GROSSLY NEGLIGENT ACTS), WILL NAVER BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY, OR PUNITIVE DAMAGES, OR LOST PROFITS OF ANY KIND, ARISING FROM OR RELATED TO THIS AGREEMENT, OR RESULTING FROM THE USE OR INABILITY TO USE THE MODEL, DERIVATIVE MODELS OR OUTPUTS (INCLUDING, BUT NOT LIMITED TO, DAMAGES FOR LOSS OF GOODWILL, WORK STOPPAGES, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR LOSSES), EVEN IF NAVER HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+ 8. Indemnity. You will indemnify and hold harmless NAVER from and against any claim by any third party arising out of or related to your use or distribution of the Model, Derivative Model or Outputs.
+
+ 9. Intellectual Property.
+
+ 9.1. This Agreement does not grant permission to use the trade names, trademarks, service marks, or product names of NAVER, except as required for reasonable and customary use in describing the origin of the Model and reproducing the content of the “Notice” text file.
+
+ 9.2. NAVER Corp. owns the Model and any Derivative Model created by NAVER Corp. Except as expressly granted in this Agreement, NAVER Corp. reserves all rights, interests and remedies in connection with the Model and Derivative Model created by NAVER Corp., and no other license or right is granted to you by implication, estoppel or otherwise. Subject to NAVER Corp.’s ownership of the Model and any Derivative Model made by or for NAVER Corp., with respect to any derivative works and modifications of the Model that are made by you, as between you and NAVER Corp., you are and will be the owner of such derivative works and modifications.
+
+ 10. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Model and will continue in full force and effect until terminated in accordance with the terms and conditions of this Agreement. NAVER may terminate this Agreement if you breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Model and Derivative Model. Sections 5, 6, 7 and 10 shall survive the termination of this Agreement.
+
+ 11. Governing Law and Jurisdiction.
+
+ 11.1. This Agreement will be governed by and construed in accordance with the laws of the Republic of Korea, without regard to its conflicts of laws principles.
+
+ 11.2. Any disputes, controversies, or claims arising out of or relating to this Agreement, including its existence, validity, interpretation, performance, breach, or termination, shall be referred to and finally resolved by arbitration administered by the Korean Commercial Arbitration Board (KCAB) in accordance with the International Arbitration Rules of the Korean Commercial Arbitration Board in force at the time of the commencement of the arbitration. The seat of arbitration shall be Seoul, Republic of Korea. The tribunal shall consist of one arbitrator. The language of the arbitration shall be English. Either party may seek interim or provisional relief from a court of competent jurisdiction, and doing so shall not be considered a waiver of any provision in this section. The arbitral tribunal also has the authority to issue orders for interim or provisional relief.
+
+ 12. Modifications. NAVER reserves the right to modify or amend this Agreement at any time, in its sole discretion. Any modifications will be effective upon posting the updated Agreement on our website or through other means of communication. You are responsible for reviewing the Agreement periodically for changes.
+
+ 13. No Waiver. NAVER will not be treated as having waived any rights by not exercising (or delaying the exercise of) any rights under this Agreement.
README.md ADDED
@@ -0,0 +1,302 @@
+ ---
+ license: other
+ license_name: hyperclovax-seed
+ license_link: LICENSE
+ library_name: transformers
+ ---
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6512d9827fccffe1e9e28fa7/Lra7yfdthGdKcNk7vP5RS.png)
+
+ ## **Overview**
+
+ HyperCLOVAX-SEED-Vision-Instruct-3B is a model developed by NAVER, built on NAVER's proprietary backbone model and adapted through post-training. It can understand both text and images, and it generates text.
+
+ The model is designed with a focus on a lightweight architecture that optimizes computational efficiency. In visual understanding, it handles visual question answering (VQA), chart and diagram interpretation, and general content comprehension. HyperCLOVAX-SEED-Vision-Instruct-3B aims for a Pareto-optimal balance tuned specifically for the Korean language, and at inference time it delivers competitive performance while using fewer visual tokens than other models of similar size.
+
+ In particular, the model shows relative strengths in handling Korean-language inputs and outperforms similarly sized open-source models on related benchmarks. As the first open-source vision-language model from Korea capable of visual understanding, it is expected to contribute significantly to strengthening Korea's sovereign AI capabilities.
+
+ ## **Updates**
+ - **(2025.07.25)**: vLLM engine is available via [our repository](https://github.com/NAVER-Cloud-HyperCLOVA-X/vllm/tree/v0.9.2rc2_hyperclovax_vision_seed)
+ - **(2025.07.08)**: Major code update to support the vLLM engine ([related discussion](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B/discussions/27))
+ - **(2025.04.22)**: Initial release of the repository.
+
+ ## **Basic Information**
+
+ - **Model Architecture**: LLaVA-based Vision-Language Model
+ - **LLM Module**: Transformer-based architecture (dense model)
+ - **Vision Encoder**: SigLIP-based architecture with 378x378px input resolution per grid.
+ - **Vision-Language Connector**: C-Abstractor-based architecture with the AnyRes mechanism, supporting up to 1.29M total pixels across 9 grids (see the sketch below).
+ - **Parameter Count**: 3.2B (LLM module) + 0.43B (vision module)
+ - **Input/Output Format**: Text + Image + Video / Text
+ - **Context Length**: 16k
+ - **Knowledge Cutoff Date**: The model was trained on data collected before August 2024.
+
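The 1.29M-pixel figure above follows directly from the grid geometry. As a quick editorial sketch (not part of the repository code):

```python
# Back-of-the-envelope check of the AnyRes pixel budget quoted above:
# each grid is a 378x378 crop, and up to 9 grids are used per image.
grid_side_px = 378
max_num_grids = 9
print(grid_side_px * grid_side_px * max_num_grids)  # 1285956, i.e. ~1.29M pixels
```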
+ ## **Training**
+
+ #### **Text**
+
+ Securing high-quality data is essential even during post-training, but having humans manually create or revise large-scale datasets posed significant limitations in both cost and resources. Tasks requiring domain expertise were also difficult to handle, and the risk of human error was high. To overcome these challenges, we used an automated validation system powered by HyperCLOVA X, which improved data quality and streamlined the training process, ultimately leading to better overall model performance. As a result, the model showed significant improvements in areas with definitive answers, such as mathematics and coding.
+
+ While reducing the cost of data collection is important, finding efficient training strategies is equally critical. HyperCLOVAX-SEED-Vision-Instruct-3B was developed from HyperCLOVAX-SEED-Text-Base-3B by applying both Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), the latter based on GRPO, an online reinforcement learning algorithm.
+
+ #### **Vision**
+
+ The vision-understanding feature, in which the model receives images and questions as input and generates text-based answers, was not part of the initial design of HyperCLOVA X. The model architecture was therefore carefully designed to add capabilities for vision-related tasks, such as image-based question answering (VQA) and chart/diagram interpretation, without compromising the existing performance of the HCX LLM. Special attention was given to handling auxiliary information within the input, especially with respect to the context length.
+
+ Although HyperCLOVAX-SEED-Vision-Instruct-3B is a lightweight model, it can perform basic image VQA tasks and even supports OCR-free processing. One key focus area for this 3B model was optimizing the efficiency of video input tokens: since input token length directly affects computational cost, the number of tokens extracted per frame was carefully tuned to enable efficient video understanding with as few tokens as possible (see the arithmetic sketch below). Additionally, during the RLHF phase, vision-specific V-RLHF data was used to enhance the model's learning, just as in the text domain.
+
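To make the per-frame token budget concrete, here is a small piece of editorial arithmetic based on the benchmark setting reported below (1856 video tokens over 108 frames); the per-frame average is ours, not a number from the repository:

```python
# Average visual-token budget per video frame in the benchmark setting below.
video_tokens = 1856
num_frames = 108
print(video_tokens / num_frames)  # ~17.2 tokens per frame on average
```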
+ ## Benchmark
+ #### Text
+
+ | **Model** | **KMMLU (5-shot, acc)** | **HAE-RAE (5-shot, acc)** | **CLIcK (5-shot, acc)** | **KoBEST (5-shot, acc)** |
+ |---|---|---|---|---|
+ | HyperCLOVAX-SEED-Text-Base-3B | 0.4847 | 0.7635 | 0.6386 | 0.7792 |
+ | HyperCLOVAX-SEED-Vision-Instruct-3B | 0.4422 | 0.6499 | 0.5599 | 0.7180 |
+ | Qwen2.5-3B-instruct | 0.4451 | 0.6031 | 0.5649 | 0.7053 |
+ | gemma-3-4b-it | 0.3895 | 0.6059 | 0.5303 | 0.7262 |
+
+ #### Vision
+
+ | Model Name | Max Token Count per Video | VideoMME (Ko) | NAVER-TV-CLIP (Ko) | VideoChatGPT (Ko) | PerceptionTest (En) | ActivityNet-QA (En) | KoNet (Ko) | MMBench-Val (En) | TextVQA-Val (En) | Korean VisIT-Bench (Ko) | Image (4 benchmarks) | Video (5 benchmarks) | All (9 benchmarks) |
+ |---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+ | HyperCLOVAX-SEED-Vision-Instruct-3B | 1856 tokens, 108 frames | 48.2 | 61.0 | 53.6 | 55.2 | 50.6 | 69.2 | 81.8 | 79.2 | 37.0 | 46.68 | 53.70 | 59.54 |
+ | HyperCLOVAX-SEED-Vision-Instruct-3B (without OCR) | 1856 tokens, 108 frames | 48.2 | 61.0 | 53.6 | 55.2 | 50.6 | 36.6 | 80.7 | 76.0 | 43.5 | 56.74 | 53.70 | 55.05 |
+ | Qwen-2.5-VL-3B | 24576 tokens, 768 frames | 55.1 | 48.3 | 45.6 | 66.9 | 55.7 | 58.3 | 84.3 | 79.6 | 81.5 | 59.35 | 54.31 | 56.55 |
+ | Qwen-2.5-VL-3B (w/ 2000 tokens) | 2000 tokens, 128 frames | 50.3 | 43.9 | 44.3 | 58.3 | 54.2 | 58.5 | 84.3 | 79.3 | 15.7 | 59.50 | 50.18 | 54.33 |
+ | Qwen-2.5-VL-7B | 24576 tokens, 768 frames | 60.6 | 66.7 | 51.8 | 70.5 | 56.6 | 68.4 | 88.3 | 84.9 | 85.6 | 69.34 | 61.23 | 64.84 |
+ | Gemma-3-4B | 4096 tokens, 16 frames | 45.4 | 36.8 | 57.1 | 50.6 | 46.3 | 25.0 | 79.2 | 58.9 | 32.3 | 48.91 | 47.24 | 47.98 |
+ | GPT4V (gpt-4-turbo-2024-04-09) | Unknown, original image, 8 frames | 49.1 | 75.0 | 55.5 | 57.4 | 45.7 | 38.7 | 84.2 | 60.4 | 52.0 | 58.88 | 51.59 | 54.83 |
+ | GPT4o (gpt-4o-2024-08-06) | Unknown, 512 resize, 128 frames | 61.6 | 66.6 | 61.8 | 50.2 | 41.7 | 60.6 | 84.2 | 73.2 | 50.5 | 67.15 | 56.42 | 61.19 |
+ | InternVL2-2B | 4096 tokens, 16 frames | 28.9 | 21.1 | 40.2 | 50.5 | 50.3 | 3.3 | 79.3 | 75.1 | 51.1 | 39.74 | 38.19 | 38.88 |
+ | InternVL2-4B | 4096 tokens, 16 frames | 33.8 | 36.0 | 22.8 | 54.2 | 52.0 | 22.7 | 83.0 | 76.9 | 51.6 | 46.11 | 39.75 | 42.58 |
+ | InternVL2-8B | 4096 tokens, 16 frames | 43.7 | 41.2 | 32.4 | 58.5 | 53.2 | 28.5 | 86.6 | 79.0 | 97.0 | 50.32 | 45.79 | 47.81 |
+
+ ## Dependencies
+ - [einops](https://einops.rocks/)
+ - [timm](https://github.com/huggingface/pytorch-image-models)
+ - [av](https://github.com/PyAV-Org/PyAV)
+ - [decord](https://github.com/dmlc/decord)
+
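As a quick, illustrative check that these dependencies are installed before running the examples below:

```python
# Each import should succeed if the dependencies listed above are installed.
import av      # noqa: F401
import decord  # noqa: F401
import einops  # noqa: F401
import timm    # noqa: F401
```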
+ ## Example
+ **(code & benchmark scores checked with transformers 4.52.4)**
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer
+
+ model_name = "naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B"
+ model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True).to(device="cuda")
+ processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ # LLM Example
+ # It is recommended to use the chat template with HyperCLOVAX models.
+ # Using the chat template allows you to easily format your input in ChatML style.
+ llm_chat = [
+     {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant!"}]},
+     {
+         "role": "user",
+         "content": [
+             {"type": "text", "text": "Hello, how are you?"},
+             {"type": "text", "text": "I said. Hello, how are you today?"},
+         ]
+     },
+     {"role": "assistant", "content": [{"type": "text", "text": "I'm doing great. How can I help you today?"}]},
+     {"role": "user", "content": [{"type": "text", "text": "I'd like to show off how chat templating works!"}]},
+ ]
+ model_inputs = processor.apply_chat_template(
+     llm_chat, tokenize=True, return_dict=True, return_tensors="pt", add_generation_prompt=True
+ )
+ model_inputs = model_inputs.to(device="cuda")
+
+ # Please adjust parameters like top_p appropriately for your use case.
+ output_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=64,
+     do_sample=True,
+     top_p=0.6,
+     temperature=0.5,
+     repetition_penalty=1.0,
+ )
+ print("=" * 80)
+ print("LLM EXAMPLE")
+ print(processor.batch_decode(output_ids)[0])
+ print("=" * 80)
+
+ # VLM Example
+ # For images and videos, you can use url, local_path, base64, or bytes as input sources.
+ vlm_chat = [
+     {"role": "system", "content": [{"text": "System Prompt", "type": "text"}]},
+     {"role": "user", "content": [{"text": "User Text Prompt 1", "type": "text"}]},
+     {
+         "role": "user",
+         "content": [{
+             "filename": "tradeoff_sota.png",
+             "image": "https://github.com/naver-ai/rdnet/blob/main/resources/images/tradeoff_sota.png?raw=true",
+             "lens_keywords": "Gucci Ophidia, cross bag, Ophidia small, GG, Supreme shoulder bag",
+             "lens_local_keywords": "[0.07, 0.21, 0.92, 0.90] Gucci Ophidia",
+             "ocr": "List the words in the image in raster order. Even if the word order feels unnatural for reading, the model will handle it as long as it follows raster order.",
+             "type": "image",
+         }],
+     },
+     {
+         "role": "user",
+         "content": [{
+             "filename": "tradeoff.png",
+             "image": "https://github.com/naver-ai/rdnet/blob/main/resources/images/tradeoff.png?raw=true",
+             "type": "image",
+         }],
+     },
+     {"role": "assistant", "content": [{"text": "Assistant Text Prompt 1", "type": "text"}]},
+     {"role": "user", "content": [{"text": "User Text Prompt 2", "type": "text"}]},
+     {
+         "role": "user",
+         "content": [
+             {
+                 "type": "video",
+                 "video": "freenaturestock-rolling-mist-clouds.mp4",
+                 "lens_keywords": "Prada re-edition, nylon bag, mini cross bag, logo strap, essential shoulder bag",
+                 "lens_local_keywords": "[0.12, 0.34, 0.85, 0.76] Prada re-edition",
+                 "speech_to_text": "Please enter the dialogue, voice, sound, lines, and words in the video in text format.",
+             },
+             {"text": "User Text Prompt 3", "type": "text"},
+         ]
+     },
+ ]
+
+ model_inputs = processor.apply_chat_template(
+     vlm_chat, tokenize=True, return_dict=True, return_tensors="pt", add_generation_prompt=True,
+ )
+ model_inputs = model_inputs.to(device="cuda")
+ output_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=64,
+     do_sample=True,
+     top_p=0.6,
+     temperature=0.5,
+     repetition_penalty=1.0,
+ )
+ print("=" * 80)
+ print("VLM EXAMPLE")
+ print(processor.batch_decode(output_ids)[0])
+ print("=" * 80)
+ ```
+
+ ## Example for v0.1.0
+ **(code & benchmark scores checked with transformers 4.45.0)**
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer
+
+ model_name = "naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B"
+ revision = "v0.1.0"
+ model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, revision=revision).to(device="cuda")
+ preprocessor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True, revision=revision)
+ tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision)
+
+ # LLM Example
+ # It is recommended to use the chat template with HyperCLOVAX models.
+ # Using the chat template allows you to easily format your input in ChatML style.
+ chat = [
+     {"role": "system", "content": "You are a helpful assistant!"},
+     {"role": "user", "content": "Hello, how are you?"},
+     {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
+     {"role": "user", "content": "I'd like to show off how chat templating works!"},
+ ]
+ input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt", tokenize=True)
+ input_ids = input_ids.to(device="cuda")
+
+ # Please adjust parameters like top_p appropriately for your use case.
+ output_ids = model.generate(
+     input_ids,
+     max_new_tokens=64,
+     do_sample=True,
+     top_p=0.6,
+     temperature=0.5,
+     repetition_penalty=1.0,
+ )
+ print("=" * 80)
+ print("LLM EXAMPLE")
+ print(tokenizer.batch_decode(output_ids)[0])
+ print("=" * 80)
+
+ # VLM Example
+ # For image and video inputs, you can use url, local_path, base64, or bytes.
+ vlm_chat = [
+     {"role": "system", "content": {"type": "text", "text": "System Prompt"}},
+     {"role": "user", "content": {"type": "text", "text": "User Text 1"}},
+     {
+         "role": "user",
+         "content": {
+             "type": "image",
+             "filename": "tradeoff_sota.png",
+             "image": "https://github.com/naver-ai/rdnet/blob/main/resources/images/tradeoff_sota.png?raw=true",
+             "ocr": "List the words in the image in raster order. Even if the word order feels unnatural for reading, the model will handle it as long as it follows raster order.",
+             "lens_keywords": "Gucci Ophidia, cross bag, Ophidia small, GG, Supreme shoulder bag",
+             "lens_local_keywords": "[0.07, 0.21, 0.92, 0.90] Gucci Ophidia",
+         }
+     },
+     {
+         "role": "user",
+         "content": {
+             "type": "image",
+             "filename": "tradeoff.png",
+             "image": "https://github.com/naver-ai/rdnet/blob/main/resources/images/tradeoff.png?raw=true",
+         }
+     },
+     {"role": "assistant", "content": {"type": "text", "text": "Assistant Text 1"}},
+     {"role": "user", "content": {"type": "text", "text": "User Text 2"}},
+     {
+         "role": "user",
+         "content": {
+             "type": "video",
+             "filename": "rolling-mist-clouds.mp4",
+             "video": "freenaturestock-rolling-mist-clouds.mp4",
+         }
+     },
+     {"role": "user", "content": {"type": "text", "text": "User Text 3"}},
+ ]
+
+ new_vlm_chat, all_images, is_video_list = preprocessor.load_images_videos(vlm_chat)
+ preprocessed = preprocessor(all_images, is_video_list=is_video_list)
+ input_ids = tokenizer.apply_chat_template(
+     new_vlm_chat, return_tensors="pt", tokenize=True, add_generation_prompt=True,
+ )
+
+ output_ids = model.generate(
+     input_ids=input_ids.to(device="cuda"),
+     max_new_tokens=8192,
+     do_sample=True,
+     top_p=0.6,
+     temperature=0.5,
+     repetition_penalty=1.0,
+     **preprocessed,
+ )
+ print("=" * 80)
+ print("VLM EXAMPLE")
+ print(tokenizer.batch_decode(output_ids)[0])
+ print("=" * 80)
+ ```
+
+ - To get the best image-understanding performance, it is recommended to include additional information such as Optical Character Recognition (OCR) results and entity recognition (Lens) results. The usage examples above are written under the assumption that OCR and Lens results are available; if you provide inputs in this format, you can expect significantly improved output quality.
+
+ ## vLLM
+ To speed up inference, you can use the vLLM engine from [our repository](https://github.com/NAVER-Cloud-HyperCLOVA-X/vllm/tree/v0.9.2rc2_hyperclovax_vision_seed).
+
+ Make sure to switch to the `v0.9.2rc2_hyperclovax_vision_seed` branch.
+
+ **Launch API server**:
+ - https://oss.navercorp.com/HYPERSCALE-AI-VISION/vllm/blob/main/README.md
+
+ **Request Example**:
+ - https://github.com/vllm-project/vllm/pull/20931#issue-3229161410
+
+ **Offline Inference Examples**:
+ - https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/vision_language.py
+ - https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/vision_language_multi_image.py
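For orientation, here is a minimal offline-inference sketch using vLLM's Python API. It assumes the custom branch above is installed and that the model loads with `trust_remote_code`; treat the linked examples as the authoritative usage.

```python
# Minimal sketch, assuming the custom vLLM branch above is installed.
from vllm import LLM, SamplingParams

llm = LLM(
    model="naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B",
    trust_remote_code=True,  # required so vLLM picks up the custom model code
)
params = SamplingParams(temperature=0.5, top_p=0.6, max_tokens=64)
outputs = llm.generate(["Hello, how are you?"], params)
print(outputs[0].outputs[0].text)
```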
added_tokens.json ADDED
@@ -0,0 +1,35 @@
+ {
+   "<EMAIL>": 110521,
+   "<KEY>": 110522,
+   "<NAME>": 110520,
+   "<PASSWORD>": 110523,
+   "<code_to_intermediate>": 110502,
+   "<empty_output>": 110501,
+   "<file_sep>": 110492,
+   "<intermediate_to_code>": 110503,
+   "<issue_closed>": 110495,
+   "<issue_comment>": 110494,
+   "<issue_start>": 110493,
+   "<jupyter_code>": 110498,
+   "<jupyter_output>": 110499,
+   "<jupyter_script>": 110500,
+   "<jupyter_start>": 110496,
+   "<jupyter_text>": 110497,
+   "<pr>": 110504,
+   "<pr_base>": 110507,
+   "<pr_base_code>": 110509,
+   "<pr_comment>": 110512,
+   "<pr_diff>": 110510,
+   "<pr_diff_hunk>": 110511,
+   "<pr_diff_hunk_comment_line>": 110519,
+   "<pr_event_id>": 110513,
+   "<pr_file>": 110508,
+   "<pr_in_reply_to_comment_id>": 110518,
+   "<pr_in_reply_to_review_id>": 110517,
+   "<pr_is_merged>": 110506,
+   "<pr_review>": 110514,
+   "<pr_review_comment>": 110516,
+   "<pr_review_state>": 110515,
+   "<pr_status>": 110505,
+   "<repo_name>": 110491
+ }
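As an illustrative sanity check (editorial, not part of the repository), these added tokens should resolve to the ids listed above once the tokenizer is loaded:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B")
print(tok.convert_tokens_to_ids("<repo_name>"))  # expected: 110491
print(tok.convert_tokens_to_ids("<EMAIL>"))      # expected: 110521
```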
chat_template.jinja ADDED
@@ -0,0 +1,65 @@
+ <|im_start|>tool_list
+ <|im_end|>
+ {% for message in messages %}
+ {% set content = message['content'] %}
+ {% set role = message['role'] %}
+ {% if loop.first and role != 'system' %}
+ <|im_start|>system
+ You are a helpful assistant.<|im_end|>
+ {% endif %}
+ {% if message['content'] is string %}
+ <|im_start|>{{ role }}
+ {{ message['content'] }}<|im_end|>
+ {% elif message['content'] is mapping %}
+ {% if content['type'] == 'image' %}
+ <|im_start|>{{ role }} (mime)
+ {"type": "image/jpeg", "filename": "{{ content['filename'] }}"}<|im_end|>
+ <|im_start|>{{ role }} (vector)
+ <|dummy3|><|im_end|>
+ <|im_start|>image/aux
+ 다음 중 ocr은 사진에서 검출된 글자이고, lens_keyword는 사진에서 추출된 keyword와 bbox 위치입니다. bbox는 0~1 사이로 정규화된 [x1, y1, x2, y2]의 형태입니다. 참고하여 답변하세요. {"ocr": "{{ content['ocr'] or '' }}", "lens_keywords": "{{ content['lens_keywords'] or '' }}", "lens_local_keywords": "{{ content['lens_local_keywords'] or '' }}"}<|im_end|>
+ {% elif content['type'] == 'video' %}
+ <|im_start|>{{ role }} (mime)
+ {"type": "video/mp4", "filename": "{{ content['filename'] }}"}<|im_end|>
+ <|im_start|>{{ role }} (vector)
+ <|_unuse_missing_100270|><|im_end|>
+ <|im_start|>image/aux
+ {% if content.get('is_final_grid') %}
+ 다음 중 lens_keyword는 사진에서 추출된 keyword와 bbox 위치입니다. bbox는 0~1 사이로 정규화된 [x1, y1, x2, y2]의 형태입니다. video_time_stamp는 비디오에서 해당 구간의 시간 정보입니다. speech_to_text는 비디오 속에서의 대화, 음성, 소리, 대사, 그리고 말을 전부 글로 받아 적은 것 입니다. 참고하여 답변하세요. {"video_time_stamp": "{{ content['video_time_stamp'] }}", "lens_keywords": "{{ content.get('lens_keywords', '') }}", "lens_local_keywords": "{{ content.get('lens_local_keywords', '') }}", "speech_to_text": "{{ content.get('speech_to_text', '') }}"}
+ {% else %}
+ 다음 중 video_time_stamp는 비디오에서 해당 구간의 시간 정보입니다. 참고하여 답변하세요. {"video_time_stamp": "{{ content['video_time_stamp'] }}"}
+ {% endif %}<|im_end|>
+ {% elif content['type'] == 'text' %}
+ <|im_start|>{{ role }}
+ {{ content['text'] }}<|im_end|>
+ {% endif %}
+ {% elif message['content'] is sequence %}
+ {% for content in message['content'] %}
+ {% if content['type'] == 'image' %}
+ <|im_start|>{{ role }} (mime)
+ {"type": "image/jpeg", "filename": "{{ content['filename'] }}"}<|im_end|>
+ <|im_start|>{{ role }} (vector)
+ <|dummy3|><|im_end|>
+ <|im_start|>image/aux
+ 다음 중 ocr은 사진에서 검출된 글자이고, lens_keyword는 사진에서 추출된 keyword와 bbox 위치입니다. bbox는 0~1 사이로 정규화된 [x1, y1, x2, y2]의 형태입니다. 참고하여 답변하세요. {"ocr": "{{ content['ocr'] or '' }}", "lens_keywords": "{{ content['lens_keywords'] or '' }}", "lens_local_keywords": "{{ content['lens_local_keywords'] or '' }}"}<|im_end|>
+ {% elif content['type'] == 'video' %}
+ <|im_start|>{{ role }} (mime)
+ {"type": "video/mp4", "filename": "{{ content['filename'] }}"}<|im_end|>
+ <|im_start|>{{ role }} (vector)
+ <|_unuse_missing_100270|><|im_end|>
+ <|im_start|>image/aux
+ {% if content.get('is_final_grid') %}
+ 다음 중 lens_keyword는 사진에서 추출된 keyword와 bbox 위치입니다. bbox는 0~1 사이로 정규화된 [x1, y1, x2, y2]의 형태입니다. video_time_stamp는 비디오에서 해당 구간의 시간 정보입니다. speech_to_text는 비디오 속에서의 대화, 음성, 소리, 대사, 그리고 말을 전부 글로 받아 적은 것 입니다. 참고하여 답변하세요. {"video_time_stamp": "{{ content['video_time_stamp'] }}", "lens_keywords": "{{ content.get('lens_keywords', '') }}", "lens_local_keywords": "{{ content.get('lens_local_keywords', '') }}", "speech_to_text": "{{ content.get('speech_to_text', '') }}"}
+ {% else %}
+ 다음 중 video_time_stamp는 비디오에서 해당 구간의 시간 정보입니다. 참고하여 답변하세요. {"video_time_stamp": "{{ content['video_time_stamp'] }}"}
+ {% endif %}<|im_end|>
+ {% elif content['type'] == 'text' %}
+ <|im_start|>{{ role }}
+ {{ content['text'] }}<|im_end|>
+ {% endif %}
+ {% endfor %}
+ {% endif %}
+ {% endfor %}
+ {% if add_generation_prompt %}
+ <|im_start|>assistant
+ {% endif %}
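For readers who don't read Korean, the Korean strings above are the literal auxiliary prompts the template emits, so they are kept verbatim. The image `image/aux` instruction says, roughly: "In the following, ocr is the text detected in the image, and lens_keyword is the keywords extracted from the image together with their bbox positions. The bbox has the form [x1, y1, x2, y2], normalized to the range 0 to 1. Refer to this when answering." The video variants add that `video_time_stamp` is the time information for the corresponding segment of the video and that `speech_to_text` is a full transcription of the dialogue, voice, sounds, lines, and speech in the video.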
config.json ADDED
@@ -0,0 +1,202 @@
+ {
+   "anyres": true,
+   "architectures": [
+     "HCXVisionForCausalLM"
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_hyperclovax.HCXVisionConfig",
+     "AutoModelForCausalLM": "modeling_hyperclovax.HCXVisionForCausalLM"
+   },
+   "decoder_max_length": 16384,
+   "freeze_decoder": false,
+   "freeze_encoder": true,
+   "freeze_mm_projector": false,
+   "hidden_size": 3072,
+   "ignore_index": -100,
+   "video_token_id": 100270,
+   "image_token_id": 100271,
+   "mm_projector_type": "cabstractor",
+   "text_config": {
+     "_attn_implementation_autoset": true,
+     "_name_or_path": "",
+     "add_cross_attention": false,
+     "architectures": [
+       "LlamaForCausalLM"
+     ],
+     "attention_bias": false,
+     "attention_dropout": 0.0,
+     "bad_words_ids": null,
+     "begin_suppress_tokens": null,
+     "bos_token_id": 100257,
+     "chunk_size_feed_forward": 0,
+     "cross_attention_hidden_size": null,
+     "decoder_start_token_id": null,
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "early_stopping": false,
+     "encoder_no_repeat_ngram_size": 0,
+     "end_token_id": 100257,
+     "eos_token_id": 100257,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": null,
+     "head_dim": 128,
+     "hidden_act": "silu",
+     "hidden_size": 3072,
+     "id2label": {
+       "0": "LABEL_0",
+       "1": "LABEL_1"
+     },
+     "initializer_range": 0.02,
+     "intermediate_size": 7168,
+     "is_decoder": false,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "LABEL_0": 0,
+       "LABEL_1": 1
+     },
+     "length_penalty": 1.0,
+     "logits_scaling": 1.0,
+     "max_length": 20,
+     "max_position_embeddings": 131072,
+     "min_length": 0,
+     "mlp_bias": false,
+     "model_type": "llama",
+     "no_repeat_ngram_size": 0,
+     "num_attention_heads": 24,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_hidden_layers": 32,
+     "num_key_value_heads": 8,
+     "num_return_sequences": 1,
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "pad_token_id": 100257,
+     "prefix": null,
+     "pretraining_tp": 1,
+     "problem_type": null,
+     "pruned_heads": {},
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "resid_pdrop": 0.2,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "rms_norm_eps": 1e-05,
+     "rope_scaling": null,
+     "rope_theta": 100000000,
+     "sep_token_id": null,
+     "suppress_tokens": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tf_legacy_loss": false,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": true,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": "bfloat16",
+     "torchscript": false,
+     "transformers_version": "4.52.4",
+     "typical_p": 1.0,
+     "use_bfloat16": false,
+     "use_cache": true,
+     "vocab_size": 110592
+   },
+   "max_image_cnt": 12,
+   "max_num_grids": 9,
+   "model_type": "hyperclovax_vlm",
+   "num_queries_vis_abstractor_image": 81,
+   "num_queries_vis_abstractor_video_slow": 81,
+   "num_queries_vis_abstractor_video_fast": 9,
+   "first_last_frames_slow": false,
+   "proj_pos_emb": true,
+   "proj_prenorm": false,
+   "q_former_model_name_or_path": null,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.52.4",
+   "unpad": true,
+   "use_1x1_grid": true,
+   "use_nth_layer": -2,
+   "vision_config": {
+     "_attn_implementation_autoset": true,
+     "_name_or_path": "",
+     "add_cross_attention": false,
+     "architectures": [
+       "SiglipVisionModel"
+     ],
+     "attention_dropout": 0.0,
+     "auto_map": {},
+     "bad_words_ids": null,
+     "begin_suppress_tokens": null,
+     "bos_token_id": null,
+     "chunk_size_feed_forward": 0,
+     "cross_attention_hidden_size": null,
+     "decoder_start_token_id": null,
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "early_stopping": false,
+     "encoder_no_repeat_ngram_size": 0,
+     "eos_token_id": null,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": null,
+     "hidden_act": "gelu_pytorch_tanh",
+     "hidden_size": 1152,
+     "id2label": {
+       "0": "LABEL_0",
+       "1": "LABEL_1"
+     },
+     "image_size": 378,
+     "initializer_factor": 1.0,
+     "intermediate_size": 4304,
+     "is_decoder": false,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "LABEL_0": 0,
+       "LABEL_1": 1
+     },
+     "layer_norm_eps": 1e-06,
+     "length_penalty": 1.0,
+     "max_length": 20,
+     "max_num_grids": 9,
+     "min_length": 0,
+     "model_type": "siglip_vision_model",
+     "no_repeat_ngram_size": 0,
+     "num_attention_heads": 16,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_channels": 3,
+     "num_hidden_layers": 27,
+     "num_return_sequences": 1,
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "pad_token_id": null,
+     "patch_size": 14,
+     "prefix": null,
+     "problem_type": null,
+     "pruned_heads": {},
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "sep_token_id": null,
+     "suppress_tokens": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tf_legacy_loss": false,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": true,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": "bfloat16",
+     "torchscript": false,
+     "transformers_version": "4.52.4",
+     "typical_p": 1.0,
+     "use_bfloat16": true
+   }
+ }
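A hedged sketch of how this configuration is resolved through the `auto_map` entries above (the printed values echo the JSON):

```python
from transformers import AutoConfig

# trust_remote_code lets AutoConfig follow auto_map to the custom HCXVisionConfig class.
cfg = AutoConfig.from_pretrained(
    "naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B",
    trust_remote_code=True,
)
print(cfg.model_type)          # hyperclovax_vlm
print(cfg.decoder_max_length)  # 16384
print(cfg.text_config.model_type, cfg.vision_config.model_type)  # llama siglip_vision_model
```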
configuration_hyperclovax.py ADDED
@@ -0,0 +1,66 @@
+ from transformers import AutoConfig
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+
+ class HCXVisionConfig(PretrainedConfig):
+     model_type = "hyperclovax_vlm"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     # GPT-2-style configs use different attribute names, so map them to the standard ones.
+     text_config_attribute_map = {
+         "n_embd": "hidden_size",
+         "n_positions": "max_position_embeddings",
+         "n_head": "num_attention_heads",
+         "n_layer": "num_hidden_layers",
+     }
+
+     def __init__(
+         self,
+         text_config=None,
+         vision_config=None,
+         use_nth_layer=-2,
+         img_start_id=100009,  # <|dummy3|>
+         decoder_max_length=4096,
+         anyres=False,
+         unpad=False,
+         max_num_grids=-1,
+         num_queries_vis_abstractor=-1,
+         ignore_index=-100,
+         proj_pos_emb=True,
+         proj_prenorm=False,
+         use_1x1_grid=False,
+         **kwargs,
+     ):
+         for key, val in self.text_config_attribute_map.items():
+             if text_config is not None and key in text_config:
+                 text_config[val] = text_config.pop(key)
+
+         if text_config is not None:
+             _text_config = AutoConfig.for_model(text_config["model_type"])
+             self.text_config = _text_config.from_dict(text_config)
+
+             # In DeepSpeed ZeRO-3, the memory size is automatically determined based on the `hidden_size` specified in the config.
+             self.hidden_size = text_config["hidden_size"] if "hidden_size" in text_config else text_config["n_embd"]
+         if vision_config is not None:
+             _vision_config = AutoConfig.for_model(vision_config["model_type"])
+             self.vision_config = _vision_config.from_dict(vision_config)
+
+         # add VLM configs
+         self.use_nth_layer = use_nth_layer
+         self.decoder_max_length = decoder_max_length
+         self.anyres = anyres
+         self.unpad = unpad
+         self.max_num_grids = max_num_grids
+         self.num_queries_vis_abstractor = num_queries_vis_abstractor
+         self.img_start_id = img_start_id
+         self.ignore_index = ignore_index
+         self.proj_pos_emb = proj_pos_emb
+         self.proj_prenorm = proj_prenorm
+         self.use_1x1_grid = use_1x1_grid
+         super().__init__(**kwargs)
+
+     def get_text_config(self, decoder=False):
+         return self.text_config
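A minimal instantiation sketch for the class above (editorial; the argument values are illustrative and echo config.json):

```python
# Illustrative only: construct the config from plain dicts, as from_dict would.
cfg = HCXVisionConfig(
    text_config={"model_type": "llama", "hidden_size": 3072},
    vision_config={"model_type": "siglip_vision_model"},
    anyres=True,
    unpad=True,
    max_num_grids=9,
)
print(cfg.hidden_size)  # 3072, mirrored from text_config for DeepSpeed ZeRO-3
```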
image_processing_hyperclovax.py ADDED
@@ -0,0 +1,789 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import copy
2
+ import math
3
+ import os
4
+ from typing import Dict, List, Optional, Union
5
+
6
+ import numpy as np
7
+ import torch
8
+ from PIL import Image
9
+ from transformers.feature_extraction_utils import BatchFeature
10
+ from transformers.image_processing_utils import (
11
+ BaseImageProcessor,
12
+ get_size_dict,
13
+ )
14
+ from transformers.image_transforms import (
15
+ convert_to_rgb,
16
+ get_resize_output_image_size,
17
+ resize,
18
+ to_channel_dimension_format,
19
+ )
20
+ from transformers.image_utils import (
21
+ OPENAI_CLIP_MEAN,
22
+ OPENAI_CLIP_STD,
23
+ ChannelDimension,
24
+ ImageInput,
25
+ PILImageResampling,
26
+ get_image_size,
27
+ infer_channel_dimension_format,
28
+ is_scaled_image,
29
+ make_list_of_images,
30
+ to_numpy_array,
31
+ valid_images,
32
+ )
33
+ from transformers.utils import TensorType, logging
34
+
35
+ logger = logging.get_logger(__name__)
36
+
37
+
38
+ class HCXImageProcessor(BaseImageProcessor):
39
+ r"""
40
+ Constructs a VLM image processor. Based on [`CLIPImageProcessor`] with incorporation of additional techniques for processing high resolution images.
41
+ Args:
42
+ anyres: (bool) anyres 기능을 사용할지 안할지
43
+ unpad: (bool) anyres 사용시, unpad 기능 (순수 pad 영역에 해당하는 visual tokens 은 LLM input 에서 제거) 을 사용할지 안할지
44
+ num_queries_vis_abstractor: (int) 각 grid 에 대해서 resampler 를 사용하는 경우, visual query 수
45
+ possible_resolutions: (List) anyres 기능 사용시, 가능한 resolution 조합, 예: [[336, 336], [336, 672], [672, 336]]
46
+ patch_size: (int) ViT patch size
47
+ pad_to_square: (bool) 정사각형으로 padding 을 수행할지, 안할지를 결정. False 이면 정사각형이 아니기 때문에 center crop 을 거쳐 ViT 의 입력으로 들어감
48
+ """
49
+
50
+ model_input_names = ["pixel_values"]
51
+
52
+ def __init__(
53
+ self,
54
+ do_resize: bool = True,
55
+ size: Dict[str, int] = None,
56
+ anyres: bool = False,
57
+ unpad: bool = False,
58
+ num_queries_vis_abstractor_image: int = 81,
59
+ num_queries_vis_abstractor_video_slow: int = 81,
60
+ num_queries_vis_abstractor_video_fast: int = 9,
61
+ first_last_frames_slow_video: bool = False,
62
+ possible_resolutions: List = [],
63
+ patch_size: int = 14,
64
+ pad_to_square: bool = True,
65
+ resample: PILImageResampling = PILImageResampling.BICUBIC,
66
+ do_center_crop: bool = True,
67
+ crop_size: Dict[str, int] = None,
68
+ do_rescale: bool = True,
69
+ rescale_factor: Union[int, float] = 1 / 255,
70
+ do_normalize: bool = True,
71
+ image_mean: Optional[Union[float, List[float]]] = None,
72
+ image_std: Optional[Union[float, List[float]]] = None,
73
+ do_convert_rgb: bool = True,
74
+ **kwargs,
75
+ ) -> None:
76
+ super().__init__(**kwargs)
77
+ size = size if size is not None else {"shortest_edge": 336}
78
+ size = get_size_dict(size, default_to_square=False)
79
+ crop_size = crop_size if crop_size is not None else {"height": 336, "width": 336}
80
+ crop_size = get_size_dict(crop_size, default_to_square=True, param_name="crop_size")
81
+
82
+ self.do_resize = do_resize
83
+ self.size = size
84
+ self.anyres = anyres
85
+ self.unpad = unpad
86
+ self.num_queries_vis_abstractor_image = num_queries_vis_abstractor_image
87
+ self.num_queries_vis_abstractor_video_slow = num_queries_vis_abstractor_video_slow
88
+ self.num_queries_vis_abstractor_video_fast = num_queries_vis_abstractor_video_fast
89
+ self.first_last_frames_slow_video = first_last_frames_slow_video
90
+ self.possible_resolutions = [_resolution for _resolution in possible_resolutions]
91
+ self.patch_size = patch_size
92
+ self.pad_to_square = pad_to_square
93
+ self.resample = resample
94
+ self.do_center_crop = do_center_crop
95
+ self.crop_size = crop_size
96
+ self.do_rescale = do_rescale
97
+ self.rescale_factor = rescale_factor
98
+ self.do_normalize = do_normalize
99
+ self.image_mean = image_mean if image_mean is not None else OPENAI_CLIP_MEAN
100
+ self.image_std = image_std if image_std is not None else OPENAI_CLIP_STD
101
+ self.do_convert_rgb = do_convert_rgb
102
+
103
+ def resize(
104
+ self,
105
+ image: np.ndarray,
106
+ size: Dict[str, int],
107
+ resample: PILImageResampling = PILImageResampling.BICUBIC,
108
+ data_format: Optional[Union[str, ChannelDimension]] = None,
109
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
110
+ **kwargs,
111
+ ) -> np.ndarray:
112
+ default_to_square = True
113
+ if "shortest_edge" in size:
114
+ size = size["shortest_edge"]
115
+ default_to_square = False
116
+ elif "height" in size and "width" in size:
117
+ size = (size["height"], size["width"])
118
+ else:
119
+ raise ValueError("Size must contain either 'shortest_edge' or 'height' and 'width'.")
120
+
121
+ output_size = get_resize_output_image_size(
122
+ image,
123
+ size=size,
124
+ default_to_square=default_to_square,
125
+ input_data_format=input_data_format,
126
+ )
127
+
128
+ return resize(
129
+ image,
130
+ size=output_size,
131
+ resample=resample,
132
+ data_format=data_format,
133
+ input_data_format=input_data_format,
134
+ **kwargs,
135
+ )
136
+
137
+ def _preprocess(
138
+ self,
139
+ images: ImageInput,
140
+ do_resize: bool = None,
141
+ size: Dict[str, int] = None,
142
+ resample: PILImageResampling = None,
143
+ do_center_crop: bool = None,
144
+ crop_size: int = None,
145
+ do_rescale: bool = None,
146
+ rescale_factor: float = None,
147
+ do_normalize: bool = None,
148
+ image_mean: Optional[Union[float, List[float]]] = None,
149
+ image_std: Optional[Union[float, List[float]]] = None,
150
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
151
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
152
+ ) -> Image.Image:
153
+ images = make_list_of_images(images)
154
+
155
+ if do_resize:
156
+ images = [
157
+ self.resize(image=image, size=size, resample=resample, input_data_format=input_data_format)
158
+ for image in images
159
+ ]
160
+
161
+ if do_center_crop:
162
+ images = [
163
+ self.center_crop(image=image, size=crop_size, input_data_format=input_data_format) for image in images
164
+ ]
165
+
166
+ if do_rescale:
167
+ images = [
168
+ self.rescale(image=image, scale=rescale_factor, input_data_format=input_data_format) for image in images
169
+ ]
170
+
171
+ if do_normalize:
172
+ images = [
173
+ self.normalize(image=image, mean=image_mean, std=image_std, input_data_format=input_data_format)
174
+ for image in images
175
+ ]
176
+
177
+ images = [
178
+ to_channel_dimension_format(image, data_format, input_channel_dim=input_data_format) for image in images
179
+ ]
180
+
181
+ return images
182
+
183
+ def _resize_for_local_grids(
184
+ self, image: np.array, target_resolution: tuple, resample, input_data_format: ChannelDimension
185
+ ) -> np.array:
186
+ new_height, new_width = _get_local_grids_output_size(image, target_resolution, input_data_format)
187
+
188
+ # Resize the image
189
+ resized_image = resize(image, (new_height, new_width), resample=resample, input_data_format=input_data_format)
190
+
191
+ return resized_image
192
+
193
+ def _pad_for_patching(
194
+ self, image: np.array, target_resolution: tuple, input_data_format: ChannelDimension
195
+ ) -> np.array:
196
+ """
197
+ Pad an image to a target resolution while maintaining aspect ratio.
198
+ """
199
+ target_height, target_width = target_resolution
200
+
201
+ background_color = tuple(int(x * 255) for x in self.image_mean)
202
+ padded_image = pad(
203
+ image,
204
+ target_size=(target_height, target_width),
205
+ background_color=background_color,
206
+ input_data_format=input_data_format,
207
+ )
208
+
209
+ return padded_image
210
+
211
+ def get_image_grids(
212
+ self,
213
+ image: np.array,
214
+ possible_resolutions,
215
+ grid_size: int,
216
+ resample: PILImageResampling,
217
+ data_format: ChannelDimension,
218
+ input_data_format: ChannelDimension,
219
+ ) -> List[np.array]:
220
+ if not isinstance(possible_resolutions, list):
221
+ raise ValueError("possible_resolutions must be a list of possible resolutions.")
222
+
223
+ image_size = get_image_size(image, channel_dim=input_data_format)
224
+ best_resolution = select_best_resolution(image_size, possible_resolutions)
225
+ resized_image = self._resize_for_local_grids(
226
+ image, best_resolution, resample=resample, input_data_format=input_data_format
227
+ )
228
+ padded_image = self._pad_for_patching(resized_image, best_resolution, input_data_format=input_data_format)
229
+ local_grids = divide_to_grids(padded_image, grid_size=grid_size, input_data_format=input_data_format)
230
+
231
+ # make sure that all patches are in the input data format
232
+ local_grids = [
233
+ to_channel_dimension_format(grid, channel_dim=data_format, input_channel_dim=input_data_format)
234
+ for grid in local_grids
235
+ ]
236
+
237
+ return local_grids
238
+
239
+ def preprocess(
240
+ self,
241
+ images: ImageInput,
242
+ do_resize: bool = None,
243
+ size: Dict[str, int] = None,
244
+ anyres: bool = None,
245
+ unpad: bool = None,
246
+ is_video: bool = False,
247
+ num_queries_vis_abstractor_image: int = None,
248
+ num_queries_vis_abstractor_video_slow: int = None,
249
+ num_queries_vis_abstractor_video_fast: int = None,
250
+ first_last_frames_slow_video: bool = None,
251
+ possible_resolutions: List = None,
252
+ patch_size: int = None,
253
+ pad_to_square: bool = None,
254
+ resample: PILImageResampling = None,
255
+ do_center_crop: bool = None,
256
+ crop_size: int = None,
257
+ do_rescale: bool = None,
258
+ rescale_factor: float = None,
259
+ do_normalize: bool = None,
260
+ image_mean: Optional[Union[float, List[float]]] = None,
261
+ image_std: Optional[Union[float, List[float]]] = None,
262
+ do_convert_rgb: bool = None,
263
+ return_tensors: Optional[Union[str, TensorType]] = None,
264
+ data_format: Optional[ChannelDimension] = ChannelDimension.FIRST,
265
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
266
+ return_dummy_image: bool = False,
267
+ first_last_frames_slow: bool = False,
268
+ is_first_or_last_frames: bool = False,
269
+ **kwargs,
270
+ ):
271
+ """
272
+ Preprocesses images with HCXVisionImageProcessor into image tensors, original image sizes (width, height), and visual token counts.
273
+ :return pixel_values: List of 4D image tensors.
274
+ :return image_sizes: List of dicts holding each image's width and height, i.e. [{"width": width of image 1, "height": height of image 1}, {"width": width of image 2, "height": height of image 2}, ...]
275
+ :return vision_query_lengths: List of ints, the number of visual tokens each image is converted to when passed to the LLM.
276
+ """
277
+
278
+ do_resize = do_resize if do_resize is not None else self.do_resize
279
+ size = size if size is not None else self.size
280
+ size = get_size_dict(size, param_name="size", default_to_square=False)
281
+ anyres = anyres if anyres is not None else self.anyres
282
+ unpad = unpad if unpad is not None else self.unpad
283
+ num_queries_vis_abstractor_image = (
284
+ num_queries_vis_abstractor_image
285
+ if num_queries_vis_abstractor_image is not None
286
+ else self.num_queries_vis_abstractor_image
287
+ )
288
+ num_queries_vis_abstractor_video_slow = (
289
+ num_queries_vis_abstractor_video_slow
290
+ if num_queries_vis_abstractor_video_slow is not None
291
+ else self.num_queries_vis_abstractor_video_slow
292
+ )
293
+ num_queries_vis_abstractor_video_fast = (
294
+ num_queries_vis_abstractor_video_fast
295
+ if num_queries_vis_abstractor_video_fast is not None
296
+ else self.num_queries_vis_abstractor_video_fast
297
+ )
298
+ first_last_frames_slow_video = (
299
+ first_last_frames_slow_video
300
+ if first_last_frames_slow_video is not None
301
+ else self.first_last_frames_slow_video
302
+ )
303
+ possible_resolutions = possible_resolutions if possible_resolutions is not None else self.possible_resolutions
304
+ patch_size = patch_size if patch_size is not None else self.patch_size
305
+ pad_to_square = pad_to_square if pad_to_square is not None else self.pad_to_square
306
+ resample = resample if resample is not None else self.resample
307
+ do_center_crop = do_center_crop if do_center_crop is not None else self.do_center_crop
308
+ crop_size = crop_size if crop_size is not None else self.crop_size
309
+ crop_size = get_size_dict(crop_size, param_name="crop_size", default_to_square=True)
310
+ do_rescale = do_rescale if do_rescale is not None else self.do_rescale
311
+ rescale_factor = rescale_factor if rescale_factor is not None else self.rescale_factor
312
+ do_normalize = do_normalize if do_normalize is not None else self.do_normalize
313
+ image_mean = image_mean if image_mean is not None else self.image_mean
314
+ image_std = image_std if image_std is not None else self.image_std
315
+ do_convert_rgb = do_convert_rgb if do_convert_rgb is not None else self.do_convert_rgb
316
+
317
+ if is_video:
318
+ num_queries_vis_abstractor = num_queries_vis_abstractor_video_fast
319
+ num_queries_vis_abstractor_slow = num_queries_vis_abstractor_video_slow
320
+ unpad = False
321
+ else:
322
+ num_queries_vis_abstractor = num_queries_vis_abstractor_image
323
+ num_queries_vis_abstractor_slow = 0
324
+
325
+ if return_dummy_image:
326
+ images = Image.new("RGB", (224, 224), (0, 0, 0))
327
+
328
+ images = make_list_of_images(images)
329
+
330
+ if not valid_images(images):
331
+ raise ValueError(
332
+ "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
333
+ "torch.Tensor, tf.Tensor or jax.ndarray."
334
+ )
335
+
336
+ if do_convert_rgb:
337
+ images = [convert_to_rgb(image) for image in images]
338
+
339
+ # All transformations expect numpy arrays.
340
+ images = [to_numpy_array(image) for image in images]
341
+
342
+ if is_scaled_image(images[0]) and do_rescale:
343
+ logger.warning_once(
344
+ "It looks like you are trying to rescale already rescaled images. If the input"
345
+ " images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again."
346
+ )
347
+
348
+ if input_data_format is None:
349
+ # We assume that all images have the same channel dimension format.
350
+ input_data_format = infer_channel_dimension_format(images[0])
351
+
352
+ new_images = []
353
+ image_sizes = [get_image_size(image, channel_dim=input_data_format) for image in images]
354
+ vision_query_lengths = []
355
+
356
+ assert crop_size["height"] == crop_size["width"]
357
+
358
+ # Padding the global image can become a bottleneck when the original image width/height are large,
359
+ # so resize the longer side to size["shortest_edge"] first, then pad.
360
+ if anyres:
361
+ anyres_global_images = copy.deepcopy(images)
362
+ if pad_to_square:
363
+ background_color = tuple(int(x * 255) for x in self.image_mean)
364
+ anyres_global_images = [
365
+ resize_longside(copy.deepcopy(image), size["shortest_edge"], resample, input_data_format)
366
+ for image in anyres_global_images
367
+ ]
368
+ anyres_global_images = [
369
+ expand2square(image, background_color=background_color, input_data_format=input_data_format)[0]
370
+ for image in anyres_global_images
371
+ ]
372
+ else:
373
+ anyres_global_images = [
374
+ self.resize(
375
+ image=image,
376
+ size={"height": size["shortest_edge"], "width": size["shortest_edge"]},
377
+ resample=resample,
378
+ input_data_format=input_data_format,
379
+ )
380
+ for image in anyres_global_images
381
+ ]
382
+ else:
383
+ anyres_global_images = [None for _ in range(len(images))]
384
+ if pad_to_square:
385
+ background_color = tuple(int(x * 255) for x in self.image_mean)
386
+ images = [
387
+ resize_longside(image, size["shortest_edge"], resample, input_data_format) for image in images
388
+ ]
389
+ images = [
390
+ expand2square(image, background_color=background_color, input_data_format=input_data_format)[0]
391
+ for image in images
392
+ ]
393
+
394
+ for image, anyres_global_image, image_size in zip(images, anyres_global_images, image_sizes):
395
+ if anyres:
396
+ # convert image into a list of grids
397
+ # we intentionally use the same data format as the input data format
398
+ image_grids = self.get_image_grids(
399
+ image,
400
+ possible_resolutions,
401
+ grid_size=crop_size["height"],
402
+ resample=resample,
403
+ data_format=input_data_format,
404
+ input_data_format=input_data_format,
405
+ )
406
+ # For video inputs, the global (thumbnail) image is not used.
407
+ if not is_video:
408
+ image_grids = [anyres_global_image] + image_grids
409
+ else:
410
+ image_grids = [image]
411
+
412
+ pixel_values = self._preprocess(
413
+ image_grids,
414
+ do_resize=do_resize,
415
+ size=size,
416
+ resample=resample,
417
+ do_center_crop=do_center_crop,
418
+ crop_size=crop_size,
419
+ do_rescale=do_rescale,
420
+ rescale_factor=rescale_factor,
421
+ do_normalize=do_normalize,
422
+ image_mean=image_mean,
423
+ image_std=image_std,
424
+ data_format=data_format,
425
+ input_data_format=input_data_format,
426
+ )
427
+
428
+ pixel_values = np.array(pixel_values)
429
+ new_images.append(pixel_values)
430
+
431
+ vision_query_length = determine_anyres_num_vision_patches(
432
+ image_size=image_size,
433
+ grid_size=crop_size["height"],
434
+ patch_size=patch_size,
435
+ possible_resolutions=possible_resolutions,
436
+ anyres=anyres,
437
+ unpad=unpad,
438
+ num_queries_vis_abstractor=num_queries_vis_abstractor,
439
+ num_queries_vis_abstractor_slow=num_queries_vis_abstractor_slow,
440
+ is_video=is_video,
441
+ first_last_frames_slow=first_last_frames_slow,
442
+ is_first_or_last_frames=is_first_or_last_frames,
443
+ )
444
+
445
+ vision_query_lengths.append(vision_query_length)
446
+
447
+ if return_dummy_image:
448
+ vision_query_lengths = []
449
+
450
+ data = {
451
+ "pixel_values": [torch.tensor(new_image) for new_image in new_images],
452
+ "image_sizes": [{"width": image_size[1], "height": image_size[0]} for image_size in image_sizes],
453
+ "vision_query_lengths": vision_query_lengths,
454
+ }
455
+
456
+ return BatchFeature(data=data, tensor_type=return_tensors)
457
+
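+ # Minimal usage sketch (hypothetical; assumes the processor is loaded with
+ # trust_remote_code=True, `model_id` points at this repo, and `img` is a PIL.Image):
+ # >>> from transformers import AutoImageProcessor
+ # >>> processor = AutoImageProcessor.from_pretrained(model_id, trust_remote_code=True)
+ # >>> out = processor.preprocess(img)
+ # >>> out["pixel_values"][0].shape   # (num_grids, 3, crop_size, crop_size)
+ # >>> out["image_sizes"], out["vision_query_lengths"]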
458
+ def save_pretrained(
459
+ self,
460
+ save_directory: Union[str, os.PathLike],
461
+ *args,
462
+ **kwargs,
463
+ ):
464
+ self.register_for_auto_class()
465
+ super().save_pretrained(save_directory, *args, **kwargs)
466
+
467
+
468
+ def determine_anyres_num_vision_patches(
469
+ image_size,
470
+ grid_size,
471
+ patch_size,
472
+ possible_resolutions,
473
+ anyres=False,
474
+ unpad=True,
475
+ num_queries_vis_abstractor=0,
476
+ num_queries_vis_abstractor_slow=0,
477
+ is_video=False,
478
+ first_last_frames_slow=False, # sample-wise option
479
+ is_first_or_last_frames=False, # grid-wise option
480
+ ):
481
+ """
482
+ Computes the number of visual tokens (patches) based on image resolution, grid configuration, and patch size.
483
+
484
+ This function supports both fixed-size and any-resolution settings, as well as video-specific configurations
485
+ such as handling slow frames and frame position flags.
486
+
487
+ Args:
489
+ image_size (tuple): The original image size as (height, width).
490
+ grid_size (int): Size of each grid in pixels (e.g., 336).
491
+ patch_size (int): Size of each vision patch (e.g., 14 for ViT models).
492
+ possible_resolutions (list): List of possible resolution tuples [(h1, w1), (h2, w2), ...].
493
+ anyres (bool, optional): Whether to use any-resolution mode. Defaults to False.
494
+ unpad (bool, optional): Whether to unpad the image before computing patches. Defaults to True.
495
+ num_queries_vis_abstractor (int, optional): Number of query tokens for vision abstractor (fast path).
496
+ num_queries_vis_abstractor_slow (int, optional): Number of query tokens for vision abstractor (slow path).
497
+ is_video (bool, optional): Whether the input is a video. Defaults to False.
498
+ first_last_frames_slow (bool, optional): Whether to treat first/last video frames as "slow". Defaults to False.
499
+ is_first_or_last_frames (bool, optional): Whether current grid corresponds to first/last frame. Defaults to False.
500
+
501
+ Returns:
502
+ int: Total number of visual tokens (patches) after processing.
503
+ """
504
+
505
+ if not anyres:
506
+ return num_queries_vis_abstractor if num_queries_vis_abstractor > 0 else (grid_size // patch_size) ** 2
507
+
508
+ if num_queries_vis_abstractor > 0:
509
+ num_patch_per_grid = int(num_queries_vis_abstractor**0.5)
510
+ else:
511
+ num_patch_per_grid = grid_size // patch_size
512
+
513
+ num_global_per_grid = num_patch_per_grid
514
+
515
+ # In anyres mode, a global image is included, so there are always at least 2 grids.
516
+ # However, for video inputs, there is no global image, so it's possible to have only 1 grid.
517
+ # Therefore, the assertion below is commented out:
518
+ # assert num_grids > 1
519
+
520
+ # Compute the number of vision patches.
521
+ height, width = select_best_resolution(image_size, possible_resolutions)
522
+
523
+ num_patch_height = (height // grid_size) * num_patch_per_grid
524
+ num_patch_width = (width // grid_size) * num_patch_per_grid
525
+
526
+ # local images
527
+ if unpad:
528
+ original_height, original_width = image_size
529
+
530
+ original_aspect_ratio = original_width / original_height
531
+ current_aspect_ratio = num_patch_width / num_patch_height
532
+
533
+ if original_aspect_ratio > current_aspect_ratio:
534
+ scale_factor = num_patch_width / original_width
535
+ new_height = int(original_height * scale_factor)
536
+ padding = (num_patch_height - new_height) // 2
537
+ num_patch_height = num_patch_height - padding * 2
538
+ else:
539
+ scale_factor = num_patch_height / original_height
540
+ new_width = int(original_width * scale_factor)
541
+ padding = (num_patch_width - new_width) // 2
542
+ num_patch_width = num_patch_width - padding * 2
543
+
544
+ num_patches = num_patch_width * num_patch_height + num_patch_height
545
+ else:
546
+ num_patches = num_patch_width * num_patch_height
547
+
548
+ # In the "slow" strategy, when applying to first and last frames only, it is applied exclusively to those two frames.
549
+ if num_queries_vis_abstractor_slow > 0:
550
+ if first_last_frames_slow:
551
+ if is_first_or_last_frames:
552
+ num_patches += num_queries_vis_abstractor_slow - num_queries_vis_abstractor
553
+ else:
554
+ num_patches += num_queries_vis_abstractor_slow - num_queries_vis_abstractor
555
+ # The slowfast feature is only applicable when unpad is set to False.
556
+ assert unpad is False
557
+
558
+ # Global image is not included for video inputs.
559
+ if not is_video:
560
+ num_patches += num_global_per_grid**2
561
+
562
+ return num_patches
563
+
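+ # Worked example (illustrative): grid_size=336, patch_size=14, anyres=True,
+ # unpad=False, num_queries_vis_abstractor=0, and a best-fit resolution of (672, 672):
+ #   num_patch_per_grid = 336 // 14 = 24
+ #   num_patch_height = num_patch_width = (672 // 336) * 24 = 48
+ #   num_patches = 48 * 48 = 2304; for a non-video input the global image adds
+ #   24 ** 2 = 576, giving 2880 visual tokens in total.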
564
+
565
+ def divide_to_grids(image: np.array, grid_size: int, input_data_format=None) -> List[np.array]:
566
+ """
567
+ Divides a local image into grids of size (grid_size x grid_size).
568
+
569
+ Args:
570
+ image (np.array): Input image as a NumPy array.
571
+ grid_size (int): The size (in pixels) of each square grid.
572
+ input_data_format (optional): Optional format specifier (e.g., "channels_first" or "channels_last").
573
+
574
+ Returns:
575
+ List[np.array]: A list of image patches, each of size (grid_size x grid_size).
576
+ """
577
+ grids = []
578
+ height, width = get_image_size(image, channel_dim=input_data_format)
579
+ for i in range(0, height, grid_size):
580
+ for j in range(0, width, grid_size):
581
+ if input_data_format == ChannelDimension.LAST:
582
+ grid = image[i : i + grid_size, j : j + grid_size]
583
+ else:
584
+ grid = image[:, i : i + grid_size, j : j + grid_size]
585
+ grids.append(grid)
586
+
587
+ return grids
588
+
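+ # Example (illustrative): a channels-first array of shape (3, 672, 1008) with
+ # grid_size=336 is split row-major into 2 * 3 = 6 grids, each of shape (3, 336, 336).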
589
+
590
+ def pad(
591
+ image: np.array,
592
+ target_size: tuple,
593
+ background_color=(127, 127, 127),
594
+ input_data_format=None,
595
+ ) -> np.array:
596
+ """
597
+ Pads the input image on the sides (top/bottom and left/right) to match the target height and width.
598
+
599
+ Args:
600
+ image (np.array): Input image as a NumPy array.
601
+ target_size (tuple): Target size as (target_height, target_width).
602
+ background_color (tuple, optional): RGB color value used for padding. Defaults to (127, 127, 127).
603
+ input_data_format (optional): Optional format specifier (e.g., "channels_first" or "channels_last").
604
+
605
+ Returns:
606
+ np.array: The padded image with the specified target size.
607
+ """
608
+ target_height, target_width = target_size
609
+ height, width = get_image_size(image, channel_dim=input_data_format)
610
+
611
+ # result = np.ones((target_height, target_width, image.shape[2]), dtype=image.dtype) * background_color
612
+ result = np.empty((target_height, target_width, image.shape[2]), dtype=image.dtype)
613
+ for i in range(image.shape[2]):
614
+ result[..., i].fill(background_color[i])
615
+
616
+ paste_x = (target_width - width) // 2
617
+ paste_y = (target_height - height) // 2
618
+
619
+ result[paste_y : paste_y + height, paste_x : paste_x + width, :] = image
620
+
621
+ return result
622
+
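+ # Example (illustrative): padding a (480, 640, 3) channels-last image to
+ # target_size=(672, 672) centers it at paste_y = (672 - 480) // 2 = 96 and
+ # paste_x = (672 - 640) // 2 = 16. Note the HWC indexing above assumes a
+ # channels-last array.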
623
+
624
+ def expand2square(
625
+ image: np.array,
626
+ bboxes_dict=None,
627
+ background_color=(127, 127, 127),
628
+ input_data_format=None,
629
+ ) -> np.array:
630
+ """
631
+ Expands the input image to a square shape by placing it at the center of a new square canvas,
632
+ with padding added to the shorter side (either top/bottom or left/right).
633
+
634
+ The image is always centered on the new canvas, and padding is applied symmetrically.
635
+
636
+ Args:
637
+ image (np.array): Input image as a NumPy array.
638
+ bboxes_dict (dict, optional): A dictionary of bounding boxes, where each value is an NDArray of shape (N, 4, 2)
639
+ with box coordinates in the format [[xtl, ytl], [xtr, ytr], [xbr, ybr], [xbl, ybl]].
640
+ Supports multiple categories (e.g., "ocr", "html") simultaneously.
641
+ background_color (tuple, optional): RGB color to fill the padding area. Defaults to (127, 127, 127).
642
+ input_data_format (optional): Optional format specifier for image data (e.g., "channels_first" or "channels_last").
643
+
644
+ Returns:
645
+ tuple: The square-shaped image with the original centered and padded as needed, plus the bboxes_dict with coordinates shifted to match (or None).
646
+
647
+ Example:
648
+ >>> _img = np.ones((80, 100), dtype=np.uint8) * 100
649
+ >>> _bboxes_dict = {"words": np.array([[[10, 10], [20, 10], [20, 20], [10, 20]],
650
+ ... [[30, 30], [40, 30], [40, 40], [30, 40]]])}
651
+ >>> _img, _bboxes_dict = expand2square(_img, _bboxes_dict, (255, 255, 255))
652
+ >>> _img.shape
653
+ (100, 100)
654
+ >>> guessed_ocr_bboxes = np.array([[[20, 10], [30, 10], [30, 20], [20, 20]],
655
+ ... [[40, 30], [50, 30], [50, 40], [40, 40]]])
656
+ >>> np.testing.assert_array_almost_equal(_bboxes_dict["words"], guessed_ocr_bboxes) is None
657
+ True
658
+ """
659
+ height, width = get_image_size(image, channel_dim=input_data_format)
660
+ if width == height:
661
+ return image, bboxes_dict
662
+ elif width > height:
663
+ # result = np.ones((width, width, image.shape[2]), dtype=image.dtype) * background_color
664
+ result = np.empty((width, width, image.shape[2]), dtype=image.dtype)
665
+ for i in range(image.shape[2]):
666
+ result[..., i].fill(background_color[i])
667
+
668
+ result[(width - height) // 2 : (width - height) // 2 + height, :] = image
669
+ if bboxes_dict is not None:
670
+ for key in bboxes_dict:
671
+ bboxes_dict[key][:, :, 1] += (width - height) // 2
672
+ return result, bboxes_dict
673
+ else:
674
+ # result = np.ones((height, height, image.shape[2]), dtype=image.dtype) * background_color
675
+ result = np.empty((height, height, image.shape[2]), dtype=image.dtype)
676
+ for i in range(image.shape[2]):
677
+ result[..., i].fill(background_color[i])
678
+
679
+ result[:, (height - width) // 2 : (height - width) // 2 + width] = image
680
+ if bboxes_dict is not None:
681
+ for key in bboxes_dict:
682
+ bboxes_dict[key][:, :, 0] += (height - width) // 2
683
+ return result, bboxes_dict
684
+
685
+
686
+ def resize_longside(
687
+ image: np.array,
688
+ size: int,
689
+ resample: PILImageResampling = PILImageResampling.BICUBIC, # type: ignore
690
+ data_format: Optional[Union[str, ChannelDimension]] = None,
691
+ input_data_format: Optional[Union[str, ChannelDimension]] = None,
692
+ ):
693
+ """
694
+ Resizes the image so that its longer side matches the specified size, maintaining the original aspect ratio.
695
+
696
+ Args:
697
+ image (np.array): Input image as a NumPy array.
698
+ size (int): Target size for the longer side of the image.
699
+ resample (PILImageResampling, optional): Resampling method to use during resizing. Defaults to BICUBIC.
700
+ data_format (str or ChannelDimension, optional): Output data format (e.g., "channels_first" or "channels_last").
701
+ input_data_format (str or ChannelDimension, optional): Input data format of the image.
702
+
703
+ Returns:
704
+ np.array: The resized image with its aspect ratio preserved.
705
+ """
706
+ height, width = get_image_size(image, channel_dim=input_data_format)
707
+
708
+ if width == height:
709
+ target_height, target_width = size, size
710
+ elif width > height:
711
+ target_width = size
712
+ target_height = math.ceil(height / width * size)
713
+ else:
714
+ target_width = math.ceil(width / height * size)
715
+ target_height = size
716
+
717
+ return resize(
718
+ image,
719
+ size=(target_height, target_width),
720
+ resample=resample,
721
+ data_format=data_format,
722
+ input_data_format=input_data_format,
723
+ )
724
+
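+ # Example (illustrative): a 600x800 (height x width) image with size=336 gives
+ # target_width = 336 and target_height = ceil(600 / 800 * 336) = 252, so the 4:3
+ # aspect ratio is preserved.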
725
+
726
+ def _get_local_grids_output_size(image: np.array, target_resolution: tuple, input_data_format=None):
727
+ """
728
+ Computes the output size (in pixels) for resizing an image to fit within a target resolution
729
+ while preserving its aspect ratio.
730
+
731
+ Args:
732
+ image (np.array): Input image as a NumPy array.
733
+ target_resolution (tuple): Target resolution in the format (target_height, target_width).
734
+ input_data_format (optional): Optional format specifier (e.g., "channels_first" or "channels_last").
735
+
736
+ Returns:
737
+ tuple: A tuple (new_height, new_width) giving the resized output size in pixels.
738
+ """
739
+ original_height, original_width = get_image_size(image, channel_dim=input_data_format)
740
+ target_height, target_width = target_resolution
741
+
742
+ scale_w = target_width / original_width
743
+ scale_h = target_height / original_height
744
+
745
+ if scale_w < scale_h:
746
+ new_width = target_width
747
+ new_height = min(math.ceil(original_height * scale_w), target_height)
748
+ else:
749
+ new_height = target_height
750
+ new_width = min(math.ceil(original_width * scale_h), target_width)
751
+
752
+ return new_height, new_width
753
+
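+ # Example (illustrative): a 600x800 image with target_resolution (672, 672):
+ # scale_w = 672 / 800 = 0.84 is smaller than scale_h = 672 / 600 = 1.12, so
+ # new_width = 672 and new_height = min(ceil(600 * 0.84), 672) = 504.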
754
+
755
+ def select_best_resolution(original_size: tuple, possible_resolutions: list) -> tuple:
756
+ """
757
+ Selects the best-fit resolution from a list of possible resolutions based on the original image size.
758
+
759
+ This function, adapted from LLaVA-Next
760
+ (https://github.com/huggingface/transformers/blob/v4.40.2/src/transformers/models/llava_next/image_processing_llava_next.py),
761
+ evaluates each resolution by computing its effective and wasted area compared to the original size.
762
+ The optimal resolution is the one that maximizes the effective area while minimizing unused (wasted) space.
763
+
764
+ Args:
765
+ original_size (tuple): The original image size in the format (height, width).
766
+ possible_resolutions (list): A list of candidate resolutions in the format [(height1, width1), (height2, width2), ...].
767
+
768
+ Returns:
769
+ tuple: The best-fit resolution in the format (height, width).
770
+ """
771
+ original_height, original_width = original_size
772
+ best_fit = None
773
+ max_effective_resolution = 0
774
+ min_wasted_resolution = float("inf")
775
+
776
+ for height, width in possible_resolutions:
777
+ scale = min(width / original_width, height / original_height)
778
+ downscaled_width, downscaled_height = int(original_width * scale), int(original_height * scale)
779
+ effective_resolution = min(downscaled_width * downscaled_height, original_width * original_height)
780
+ wasted_resolution = (width * height) - effective_resolution
781
+
782
+ if effective_resolution > max_effective_resolution or (
783
+ effective_resolution == max_effective_resolution and wasted_resolution < min_wasted_resolution
784
+ ):
785
+ max_effective_resolution = effective_resolution
786
+ min_wasted_resolution = wasted_resolution
787
+ best_fit = (height, width)
788
+
789
+ return best_fit
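+ # Worked example (illustrative): original_size (600, 800) with candidates
+ # [(336, 672), (672, 336), (672, 672)]:
+ #   (336, 672): scale 0.56 -> effective 336 * 448 = 150528, wasted  75264
+ #   (672, 336): scale 0.42 -> effective 252 * 336 =  84672, wasted 141120
+ #   (672, 672): scale 0.84 -> effective 504 * 672 = 338688, wasted 112896
+ # (672, 672) maximizes the effective resolution and is returned.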
merges.txt ADDED
The diff for this file is too large to render. See raw diff
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:340b93b87f93c98b62d2c96ef56e4656d9d68ec8a1cd178fe6812c925f8d8d88
3
+ size 4997245472
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f85a5d24cadbb3d235c670a88f9b0757ff50b226819ba0a3cece51a72a2891e4
3
+ size 4920253536
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a4efe99ddd60fc020ad202df4ed089ab0a280c7b17d5376f011888fde8dd2c44
3
+ size 4967583384
model.safetensors.index.json ADDED
@@ -0,0 +1,829 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 14884974080
4
+ },
5
+ "weight_map": {
6
+ "image_newline": "model-00001-of-00003.safetensors",
7
+ "language_model.model.embed_tokens.weight": "model-00001-of-00003.safetensors",
8
+ "language_model.model.layers.0.input_layernorm.weight": "model-00001-of-00003.safetensors",
9
+ "language_model.model.layers.0.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
10
+ "language_model.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
11
+ "language_model.model.layers.0.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
12
+ "language_model.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
13
+ "language_model.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
14
+ "language_model.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
15
+ "language_model.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
16
+ "language_model.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
17
+ "language_model.model.layers.1.input_layernorm.weight": "model-00001-of-00003.safetensors",
18
+ "language_model.model.layers.1.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
19
+ "language_model.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
20
+ "language_model.model.layers.1.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
21
+ "language_model.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
22
+ "language_model.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
23
+ "language_model.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
24
+ "language_model.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
25
+ "language_model.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
26
+ "language_model.model.layers.10.input_layernorm.weight": "model-00002-of-00003.safetensors",
27
+ "language_model.model.layers.10.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
28
+ "language_model.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
29
+ "language_model.model.layers.10.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
30
+ "language_model.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
31
+ "language_model.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
32
+ "language_model.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
33
+ "language_model.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
34
+ "language_model.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
35
+ "language_model.model.layers.11.input_layernorm.weight": "model-00002-of-00003.safetensors",
36
+ "language_model.model.layers.11.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
37
+ "language_model.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
38
+ "language_model.model.layers.11.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
39
+ "language_model.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
40
+ "language_model.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
41
+ "language_model.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
42
+ "language_model.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
43
+ "language_model.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
44
+ "language_model.model.layers.12.input_layernorm.weight": "model-00002-of-00003.safetensors",
45
+ "language_model.model.layers.12.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
46
+ "language_model.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
47
+ "language_model.model.layers.12.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
48
+ "language_model.model.layers.12.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
49
+ "language_model.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
50
+ "language_model.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
51
+ "language_model.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
52
+ "language_model.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
53
+ "language_model.model.layers.13.input_layernorm.weight": "model-00002-of-00003.safetensors",
54
+ "language_model.model.layers.13.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
55
+ "language_model.model.layers.13.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
56
+ "language_model.model.layers.13.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
57
+ "language_model.model.layers.13.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
58
+ "language_model.model.layers.13.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
59
+ "language_model.model.layers.13.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
60
+ "language_model.model.layers.13.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
61
+ "language_model.model.layers.13.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
62
+ "language_model.model.layers.14.input_layernorm.weight": "model-00002-of-00003.safetensors",
63
+ "language_model.model.layers.14.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
64
+ "language_model.model.layers.14.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
65
+ "language_model.model.layers.14.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
66
+ "language_model.model.layers.14.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
67
+ "language_model.model.layers.14.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
68
+ "language_model.model.layers.14.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
69
+ "language_model.model.layers.14.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
70
+ "language_model.model.layers.14.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
71
+ "language_model.model.layers.15.input_layernorm.weight": "model-00002-of-00003.safetensors",
72
+ "language_model.model.layers.15.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
73
+ "language_model.model.layers.15.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
74
+ "language_model.model.layers.15.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
75
+ "language_model.model.layers.15.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
76
+ "language_model.model.layers.15.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
77
+ "language_model.model.layers.15.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
78
+ "language_model.model.layers.15.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
79
+ "language_model.model.layers.15.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
80
+ "language_model.model.layers.16.input_layernorm.weight": "model-00002-of-00003.safetensors",
81
+ "language_model.model.layers.16.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
82
+ "language_model.model.layers.16.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
83
+ "language_model.model.layers.16.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
84
+ "language_model.model.layers.16.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
85
+ "language_model.model.layers.16.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
86
+ "language_model.model.layers.16.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
87
+ "language_model.model.layers.16.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
88
+ "language_model.model.layers.16.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
89
+ "language_model.model.layers.17.input_layernorm.weight": "model-00002-of-00003.safetensors",
90
+ "language_model.model.layers.17.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
91
+ "language_model.model.layers.17.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
92
+ "language_model.model.layers.17.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
93
+ "language_model.model.layers.17.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
94
+ "language_model.model.layers.17.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
95
+ "language_model.model.layers.17.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
96
+ "language_model.model.layers.17.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
97
+ "language_model.model.layers.17.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
98
+ "language_model.model.layers.18.input_layernorm.weight": "model-00003-of-00003.safetensors",
99
+ "language_model.model.layers.18.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
100
+ "language_model.model.layers.18.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
101
+ "language_model.model.layers.18.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
102
+ "language_model.model.layers.18.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
103
+ "language_model.model.layers.18.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
104
+ "language_model.model.layers.18.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
105
+ "language_model.model.layers.18.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
106
+ "language_model.model.layers.18.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
107
+ "language_model.model.layers.19.input_layernorm.weight": "model-00003-of-00003.safetensors",
108
+ "language_model.model.layers.19.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
109
+ "language_model.model.layers.19.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
110
+ "language_model.model.layers.19.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
111
+ "language_model.model.layers.19.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
112
+ "language_model.model.layers.19.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
113
+ "language_model.model.layers.19.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
114
+ "language_model.model.layers.19.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
115
+ "language_model.model.layers.19.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
116
+ "language_model.model.layers.2.input_layernorm.weight": "model-00001-of-00003.safetensors",
117
+ "language_model.model.layers.2.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
118
+ "language_model.model.layers.2.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
119
+ "language_model.model.layers.2.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
120
+ "language_model.model.layers.2.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
121
+ "language_model.model.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
122
+ "language_model.model.layers.2.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
123
+ "language_model.model.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
124
+ "language_model.model.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
125
+ "language_model.model.layers.20.input_layernorm.weight": "model-00003-of-00003.safetensors",
126
+ "language_model.model.layers.20.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
127
+ "language_model.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
128
+ "language_model.model.layers.20.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
129
+ "language_model.model.layers.20.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
130
+ "language_model.model.layers.20.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
131
+ "language_model.model.layers.20.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
132
+ "language_model.model.layers.20.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
133
+ "language_model.model.layers.20.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
134
+ "language_model.model.layers.21.input_layernorm.weight": "model-00003-of-00003.safetensors",
135
+ "language_model.model.layers.21.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
136
+ "language_model.model.layers.21.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
137
+ "language_model.model.layers.21.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
138
+ "language_model.model.layers.21.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
139
+ "language_model.model.layers.21.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
140
+ "language_model.model.layers.21.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
141
+ "language_model.model.layers.21.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
142
+ "language_model.model.layers.21.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
143
+ "language_model.model.layers.22.input_layernorm.weight": "model-00003-of-00003.safetensors",
144
+ "language_model.model.layers.22.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
145
+ "language_model.model.layers.22.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
146
+ "language_model.model.layers.22.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
147
+ "language_model.model.layers.22.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
148
+ "language_model.model.layers.22.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
149
+ "language_model.model.layers.22.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
150
+ "language_model.model.layers.22.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
151
+ "language_model.model.layers.22.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
152
+ "language_model.model.layers.23.input_layernorm.weight": "model-00003-of-00003.safetensors",
153
+ "language_model.model.layers.23.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
154
+ "language_model.model.layers.23.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
155
+ "language_model.model.layers.23.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
156
+ "language_model.model.layers.23.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
157
+ "language_model.model.layers.23.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
158
+ "language_model.model.layers.23.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
159
+ "language_model.model.layers.23.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
160
+ "language_model.model.layers.23.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
161
+ "language_model.model.layers.24.input_layernorm.weight": "model-00003-of-00003.safetensors",
162
+ "language_model.model.layers.24.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
163
+ "language_model.model.layers.24.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
164
+ "language_model.model.layers.24.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
165
+ "language_model.model.layers.24.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
166
+ "language_model.model.layers.24.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
167
+ "language_model.model.layers.24.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
168
+ "language_model.model.layers.24.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
169
+ "language_model.model.layers.24.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
170
+ "language_model.model.layers.25.input_layernorm.weight": "model-00003-of-00003.safetensors",
171
+ "language_model.model.layers.25.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
172
+ "language_model.model.layers.25.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
173
+ "language_model.model.layers.25.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
174
+ "language_model.model.layers.25.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
175
+ "language_model.model.layers.25.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
176
+ "language_model.model.layers.25.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
177
+ "language_model.model.layers.25.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
178
+ "language_model.model.layers.25.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
179
+ "language_model.model.layers.26.input_layernorm.weight": "model-00003-of-00003.safetensors",
180
+ "language_model.model.layers.26.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
181
+ "language_model.model.layers.26.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
182
+ "language_model.model.layers.26.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
183
+ "language_model.model.layers.26.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
184
+ "language_model.model.layers.26.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
185
+ "language_model.model.layers.26.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
186
+ "language_model.model.layers.26.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
187
+ "language_model.model.layers.26.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
188
+ "language_model.model.layers.27.input_layernorm.weight": "model-00003-of-00003.safetensors",
189
+ "language_model.model.layers.27.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
190
+ "language_model.model.layers.27.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
191
+ "language_model.model.layers.27.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
192
+ "language_model.model.layers.27.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
193
+ "language_model.model.layers.27.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
194
+ "language_model.model.layers.27.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
195
+ "language_model.model.layers.27.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
196
+ "language_model.model.layers.27.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
197
+ "language_model.model.layers.28.input_layernorm.weight": "model-00003-of-00003.safetensors",
198
+ "language_model.model.layers.28.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
199
+ "language_model.model.layers.28.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
200
+ "language_model.model.layers.28.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
201
+ "language_model.model.layers.28.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
202
+ "language_model.model.layers.28.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
203
+ "language_model.model.layers.28.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
204
+ "language_model.model.layers.28.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
205
+ "language_model.model.layers.28.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
206
+ "language_model.model.layers.29.input_layernorm.weight": "model-00003-of-00003.safetensors",
207
+ "language_model.model.layers.29.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
208
+ "language_model.model.layers.29.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
209
+ "language_model.model.layers.29.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
210
+ "language_model.model.layers.29.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
211
+ "language_model.model.layers.29.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
212
+ "language_model.model.layers.29.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
213
+ "language_model.model.layers.29.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
214
+ "language_model.model.layers.29.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
215
+ "language_model.model.layers.3.input_layernorm.weight": "model-00001-of-00003.safetensors",
216
+ "language_model.model.layers.3.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
217
+ "language_model.model.layers.3.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
218
+ "language_model.model.layers.3.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
219
+ "language_model.model.layers.3.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
220
+ "language_model.model.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
221
+ "language_model.model.layers.3.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
222
+ "language_model.model.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
223
+ "language_model.model.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
224
+ "language_model.model.layers.30.input_layernorm.weight": "model-00003-of-00003.safetensors",
225
+ "language_model.model.layers.30.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
226
+ "language_model.model.layers.30.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
227
+ "language_model.model.layers.30.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
228
+ "language_model.model.layers.30.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
229
+ "language_model.model.layers.30.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
230
+ "language_model.model.layers.30.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
231
+ "language_model.model.layers.30.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
232
+ "language_model.model.layers.30.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
233
+ "language_model.model.layers.31.input_layernorm.weight": "model-00003-of-00003.safetensors",
234
+ "language_model.model.layers.31.mlp.down_proj.weight": "model-00003-of-00003.safetensors",
235
+ "language_model.model.layers.31.mlp.gate_proj.weight": "model-00003-of-00003.safetensors",
236
+ "language_model.model.layers.31.mlp.up_proj.weight": "model-00003-of-00003.safetensors",
237
+ "language_model.model.layers.31.post_attention_layernorm.weight": "model-00003-of-00003.safetensors",
238
+ "language_model.model.layers.31.self_attn.k_proj.weight": "model-00003-of-00003.safetensors",
239
+ "language_model.model.layers.31.self_attn.o_proj.weight": "model-00003-of-00003.safetensors",
240
+ "language_model.model.layers.31.self_attn.q_proj.weight": "model-00003-of-00003.safetensors",
241
+ "language_model.model.layers.31.self_attn.v_proj.weight": "model-00003-of-00003.safetensors",
242
+ "language_model.model.layers.4.input_layernorm.weight": "model-00001-of-00003.safetensors",
243
+ "language_model.model.layers.4.mlp.down_proj.weight": "model-00001-of-00003.safetensors",
244
+ "language_model.model.layers.4.mlp.gate_proj.weight": "model-00001-of-00003.safetensors",
245
+ "language_model.model.layers.4.mlp.up_proj.weight": "model-00001-of-00003.safetensors",
246
+ "language_model.model.layers.4.post_attention_layernorm.weight": "model-00001-of-00003.safetensors",
247
+ "language_model.model.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
248
+ "language_model.model.layers.4.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
249
+ "language_model.model.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
250
+ "language_model.model.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
251
+ "language_model.model.layers.5.input_layernorm.weight": "model-00002-of-00003.safetensors",
252
+ "language_model.model.layers.5.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
253
+ "language_model.model.layers.5.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
254
+ "language_model.model.layers.5.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
255
+ "language_model.model.layers.5.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
256
+ "language_model.model.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
257
+ "language_model.model.layers.5.self_attn.o_proj.weight": "model-00001-of-00003.safetensors",
258
+ "language_model.model.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
259
+ "language_model.model.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
260
+ "language_model.model.layers.6.input_layernorm.weight": "model-00002-of-00003.safetensors",
261
+ "language_model.model.layers.6.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
262
+ "language_model.model.layers.6.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
263
+ "language_model.model.layers.6.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
264
+ "language_model.model.layers.6.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
265
+ "language_model.model.layers.6.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
266
+ "language_model.model.layers.6.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
267
+ "language_model.model.layers.6.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
268
+ "language_model.model.layers.6.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
269
+ "language_model.model.layers.7.input_layernorm.weight": "model-00002-of-00003.safetensors",
270
+ "language_model.model.layers.7.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
271
+ "language_model.model.layers.7.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
272
+ "language_model.model.layers.7.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
273
+ "language_model.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
274
+ "language_model.model.layers.7.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
275
+ "language_model.model.layers.7.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
276
+ "language_model.model.layers.7.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
277
+ "language_model.model.layers.7.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
278
+ "language_model.model.layers.8.input_layernorm.weight": "model-00002-of-00003.safetensors",
279
+ "language_model.model.layers.8.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
280
+ "language_model.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
281
+ "language_model.model.layers.8.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
282
+ "language_model.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
283
+ "language_model.model.layers.8.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
284
+ "language_model.model.layers.8.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
285
+ "language_model.model.layers.8.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
286
+ "language_model.model.layers.8.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
287
+ "language_model.model.layers.9.input_layernorm.weight": "model-00002-of-00003.safetensors",
288
+ "language_model.model.layers.9.mlp.down_proj.weight": "model-00002-of-00003.safetensors",
289
+ "language_model.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00003.safetensors",
290
+ "language_model.model.layers.9.mlp.up_proj.weight": "model-00002-of-00003.safetensors",
291
+ "language_model.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00003.safetensors",
292
+ "language_model.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00003.safetensors",
293
+ "language_model.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00003.safetensors",
294
+ "language_model.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00003.safetensors",
295
+ "language_model.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00003.safetensors",
296
+ "language_model.model.norm.weight": "model-00003-of-00003.safetensors",
297
+ "mm_projector.net.0.b1.conv1.bn.bias": "model-00003-of-00003.safetensors",
298
+ "mm_projector.net.0.b1.conv1.bn.weight": "model-00003-of-00003.safetensors",
299
+ "mm_projector.net.0.b1.conv1.conv.weight": "model-00003-of-00003.safetensors",
300
+ "mm_projector.net.0.b1.conv2.bn.bias": "model-00003-of-00003.safetensors",
301
+ "mm_projector.net.0.b1.conv2.bn.weight": "model-00003-of-00003.safetensors",
302
+ "mm_projector.net.0.b1.conv2.conv.weight": "model-00003-of-00003.safetensors",
303
+ "mm_projector.net.0.b1.conv3.bn.bias": "model-00003-of-00003.safetensors",
304
+ "mm_projector.net.0.b1.conv3.bn.weight": "model-00003-of-00003.safetensors",
305
+ "mm_projector.net.0.b1.conv3.conv.weight": "model-00003-of-00003.safetensors",
306
+ "mm_projector.net.0.b1.se.fc1.bias": "model-00003-of-00003.safetensors",
307
+ "mm_projector.net.0.b1.se.fc1.weight": "model-00003-of-00003.safetensors",
308
+ "mm_projector.net.0.b1.se.fc2.bias": "model-00003-of-00003.safetensors",
309
+ "mm_projector.net.0.b1.se.fc2.weight": "model-00003-of-00003.safetensors",
310
+ "mm_projector.net.0.b2.conv1.bn.bias": "model-00003-of-00003.safetensors",
311
+ "mm_projector.net.0.b2.conv1.bn.weight": "model-00003-of-00003.safetensors",
312
+ "mm_projector.net.0.b2.conv1.conv.weight": "model-00003-of-00003.safetensors",
313
+ "mm_projector.net.0.b2.conv2.bn.bias": "model-00003-of-00003.safetensors",
314
+ "mm_projector.net.0.b2.conv2.bn.weight": "model-00003-of-00003.safetensors",
315
+ "mm_projector.net.0.b2.conv2.conv.weight": "model-00003-of-00003.safetensors",
316
+ "mm_projector.net.0.b2.conv3.bn.bias": "model-00003-of-00003.safetensors",
317
+ "mm_projector.net.0.b2.conv3.bn.weight": "model-00003-of-00003.safetensors",
318
+ "mm_projector.net.0.b2.conv3.conv.weight": "model-00003-of-00003.safetensors",
319
+ "mm_projector.net.0.b2.se.fc1.bias": "model-00003-of-00003.safetensors",
320
+ "mm_projector.net.0.b2.se.fc1.weight": "model-00003-of-00003.safetensors",
321
+ "mm_projector.net.0.b2.se.fc2.bias": "model-00003-of-00003.safetensors",
322
+ "mm_projector.net.0.b2.se.fc2.weight": "model-00003-of-00003.safetensors",
323
+ "mm_projector.net.0.b3.conv1.bn.bias": "model-00003-of-00003.safetensors",
324
+ "mm_projector.net.0.b3.conv1.bn.weight": "model-00003-of-00003.safetensors",
325
+ "mm_projector.net.0.b3.conv1.conv.weight": "model-00003-of-00003.safetensors",
326
+ "mm_projector.net.0.b3.conv2.bn.bias": "model-00003-of-00003.safetensors",
327
+ "mm_projector.net.0.b3.conv2.bn.weight": "model-00003-of-00003.safetensors",
328
+ "mm_projector.net.0.b3.conv2.conv.weight": "model-00003-of-00003.safetensors",
329
+ "mm_projector.net.0.b3.conv3.bn.bias": "model-00003-of-00003.safetensors",
330
+ "mm_projector.net.0.b3.conv3.bn.weight": "model-00003-of-00003.safetensors",
331
+ "mm_projector.net.0.b3.conv3.conv.weight": "model-00003-of-00003.safetensors",
332
+ "mm_projector.net.0.b3.se.fc1.bias": "model-00003-of-00003.safetensors",
333
+ "mm_projector.net.0.b3.se.fc1.weight": "model-00003-of-00003.safetensors",
334
+ "mm_projector.net.0.b3.se.fc2.bias": "model-00003-of-00003.safetensors",
335
+ "mm_projector.net.0.b3.se.fc2.weight": "model-00003-of-00003.safetensors",
336
+ "mm_projector.net.2.b1.conv1.bn.bias": "model-00003-of-00003.safetensors",
337
+ "mm_projector.net.2.b1.conv1.bn.weight": "model-00003-of-00003.safetensors",
338
+ "mm_projector.net.2.b1.conv1.conv.weight": "model-00003-of-00003.safetensors",
339
+ "mm_projector.net.2.b1.conv2.bn.bias": "model-00003-of-00003.safetensors",
340
+ "mm_projector.net.2.b1.conv2.bn.weight": "model-00003-of-00003.safetensors",
341
+ "mm_projector.net.2.b1.conv2.conv.weight": "model-00003-of-00003.safetensors",
342
+ "mm_projector.net.2.b1.conv3.bn.bias": "model-00003-of-00003.safetensors",
343
+ "mm_projector.net.2.b1.conv3.bn.weight": "model-00003-of-00003.safetensors",
344
+ "mm_projector.net.2.b1.conv3.conv.weight": "model-00003-of-00003.safetensors",
345
+ "mm_projector.net.2.b1.se.fc1.bias": "model-00003-of-00003.safetensors",
346
+ "mm_projector.net.2.b1.se.fc1.weight": "model-00003-of-00003.safetensors",
347
+ "mm_projector.net.2.b1.se.fc2.bias": "model-00003-of-00003.safetensors",
348
+ "mm_projector.net.2.b1.se.fc2.weight": "model-00003-of-00003.safetensors",
349
+ "mm_projector.net.2.b2.conv1.bn.bias": "model-00003-of-00003.safetensors",
350
+ "mm_projector.net.2.b2.conv1.bn.weight": "model-00003-of-00003.safetensors",
351
+ "mm_projector.net.2.b2.conv1.conv.weight": "model-00003-of-00003.safetensors",
352
+ "mm_projector.net.2.b2.conv2.bn.bias": "model-00003-of-00003.safetensors",
353
+ "mm_projector.net.2.b2.conv2.bn.weight": "model-00003-of-00003.safetensors",
354
+ "mm_projector.net.2.b2.conv2.conv.weight": "model-00003-of-00003.safetensors",
355
+ "mm_projector.net.2.b2.conv3.bn.bias": "model-00003-of-00003.safetensors",
356
+ "mm_projector.net.2.b2.conv3.bn.weight": "model-00003-of-00003.safetensors",
357
+ "mm_projector.net.2.b2.conv3.conv.weight": "model-00003-of-00003.safetensors",
358
+ "mm_projector.net.2.b2.se.fc1.bias": "model-00003-of-00003.safetensors",
359
+ "mm_projector.net.2.b2.se.fc1.weight": "model-00003-of-00003.safetensors",
360
+ "mm_projector.net.2.b2.se.fc2.bias": "model-00003-of-00003.safetensors",
361
+ "mm_projector.net.2.b2.se.fc2.weight": "model-00003-of-00003.safetensors",
362
+ "mm_projector.net.2.b3.conv1.bn.bias": "model-00003-of-00003.safetensors",
363
+ "mm_projector.net.2.b3.conv1.bn.weight": "model-00003-of-00003.safetensors",
364
+ "mm_projector.net.2.b3.conv1.conv.weight": "model-00003-of-00003.safetensors",
365
+ "mm_projector.net.2.b3.conv2.bn.bias": "model-00003-of-00003.safetensors",
366
+ "mm_projector.net.2.b3.conv2.bn.weight": "model-00003-of-00003.safetensors",
367
+ "mm_projector.net.2.b3.conv2.conv.weight": "model-00003-of-00003.safetensors",
368
+ "mm_projector.net.2.b3.conv3.bn.bias": "model-00003-of-00003.safetensors",
369
+ "mm_projector.net.2.b3.conv3.bn.weight": "model-00003-of-00003.safetensors",
370
+ "mm_projector.net.2.b3.conv3.conv.weight": "model-00003-of-00003.safetensors",
371
+ "mm_projector.net.2.b3.se.fc1.bias": "model-00003-of-00003.safetensors",
372
+ "mm_projector.net.2.b3.se.fc1.weight": "model-00003-of-00003.safetensors",
373
+ "mm_projector.net.2.b3.se.fc2.bias": "model-00003-of-00003.safetensors",
374
+ "mm_projector.net.2.b3.se.fc2.weight": "model-00003-of-00003.safetensors",
375
+ "mm_projector.pos_emb": "model-00003-of-00003.safetensors",
376
+ "mm_projector.readout.0.bias": "model-00003-of-00003.safetensors",
377
+ "mm_projector.readout.0.weight": "model-00003-of-00003.safetensors",
378
+ "mm_projector.readout.2.bias": "model-00003-of-00003.safetensors",
379
+ "mm_projector.readout.2.weight": "model-00003-of-00003.safetensors",
380
+ "vision_model.vision_model.embeddings.patch_embedding.bias": "model-00001-of-00003.safetensors",
381
+ "vision_model.vision_model.embeddings.patch_embedding.weight": "model-00001-of-00003.safetensors",
382
+ "vision_model.vision_model.embeddings.position_embedding.weight": "model-00001-of-00003.safetensors",
383
+ "vision_model.vision_model.encoder.layers.0.layer_norm1.bias": "model-00001-of-00003.safetensors",
384
+ "vision_model.vision_model.encoder.layers.0.layer_norm1.weight": "model-00001-of-00003.safetensors",
385
+ "vision_model.vision_model.encoder.layers.0.layer_norm2.bias": "model-00001-of-00003.safetensors",
386
+ "vision_model.vision_model.encoder.layers.0.layer_norm2.weight": "model-00001-of-00003.safetensors",
387
+ "vision_model.vision_model.encoder.layers.0.mlp.fc1.bias": "model-00001-of-00003.safetensors",
388
+ "vision_model.vision_model.encoder.layers.0.mlp.fc1.weight": "model-00001-of-00003.safetensors",
389
+ "vision_model.vision_model.encoder.layers.0.mlp.fc2.bias": "model-00001-of-00003.safetensors",
390
+ "vision_model.vision_model.encoder.layers.0.mlp.fc2.weight": "model-00001-of-00003.safetensors",
391
+ "vision_model.vision_model.encoder.layers.0.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
392
+ "vision_model.vision_model.encoder.layers.0.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
393
+ "vision_model.vision_model.encoder.layers.0.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
394
+ "vision_model.vision_model.encoder.layers.0.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
395
+ "vision_model.vision_model.encoder.layers.0.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
396
+ "vision_model.vision_model.encoder.layers.0.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
397
+ "vision_model.vision_model.encoder.layers.0.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
398
+ "vision_model.vision_model.encoder.layers.0.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
399
+ "vision_model.vision_model.encoder.layers.1.layer_norm1.bias": "model-00001-of-00003.safetensors",
400
+ "vision_model.vision_model.encoder.layers.1.layer_norm1.weight": "model-00001-of-00003.safetensors",
401
+ "vision_model.vision_model.encoder.layers.1.layer_norm2.bias": "model-00001-of-00003.safetensors",
402
+ "vision_model.vision_model.encoder.layers.1.layer_norm2.weight": "model-00001-of-00003.safetensors",
403
+ "vision_model.vision_model.encoder.layers.1.mlp.fc1.bias": "model-00001-of-00003.safetensors",
404
+ "vision_model.vision_model.encoder.layers.1.mlp.fc1.weight": "model-00001-of-00003.safetensors",
405
+ "vision_model.vision_model.encoder.layers.1.mlp.fc2.bias": "model-00001-of-00003.safetensors",
406
+ "vision_model.vision_model.encoder.layers.1.mlp.fc2.weight": "model-00001-of-00003.safetensors",
407
+ "vision_model.vision_model.encoder.layers.1.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
408
+ "vision_model.vision_model.encoder.layers.1.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
409
+ "vision_model.vision_model.encoder.layers.1.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
410
+ "vision_model.vision_model.encoder.layers.1.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
411
+ "vision_model.vision_model.encoder.layers.1.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
412
+ "vision_model.vision_model.encoder.layers.1.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
413
+ "vision_model.vision_model.encoder.layers.1.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
414
+ "vision_model.vision_model.encoder.layers.1.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
415
+ "vision_model.vision_model.encoder.layers.10.layer_norm1.bias": "model-00001-of-00003.safetensors",
416
+ "vision_model.vision_model.encoder.layers.10.layer_norm1.weight": "model-00001-of-00003.safetensors",
417
+ "vision_model.vision_model.encoder.layers.10.layer_norm2.bias": "model-00001-of-00003.safetensors",
418
+ "vision_model.vision_model.encoder.layers.10.layer_norm2.weight": "model-00001-of-00003.safetensors",
419
+ "vision_model.vision_model.encoder.layers.10.mlp.fc1.bias": "model-00001-of-00003.safetensors",
420
+ "vision_model.vision_model.encoder.layers.10.mlp.fc1.weight": "model-00001-of-00003.safetensors",
421
+ "vision_model.vision_model.encoder.layers.10.mlp.fc2.bias": "model-00001-of-00003.safetensors",
422
+ "vision_model.vision_model.encoder.layers.10.mlp.fc2.weight": "model-00001-of-00003.safetensors",
423
+ "vision_model.vision_model.encoder.layers.10.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
424
+ "vision_model.vision_model.encoder.layers.10.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
425
+ "vision_model.vision_model.encoder.layers.10.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
426
+ "vision_model.vision_model.encoder.layers.10.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
427
+ "vision_model.vision_model.encoder.layers.10.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
428
+ "vision_model.vision_model.encoder.layers.10.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
429
+ "vision_model.vision_model.encoder.layers.10.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
430
+ "vision_model.vision_model.encoder.layers.10.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
431
+ "vision_model.vision_model.encoder.layers.11.layer_norm1.bias": "model-00001-of-00003.safetensors",
432
+ "vision_model.vision_model.encoder.layers.11.layer_norm1.weight": "model-00001-of-00003.safetensors",
433
+ "vision_model.vision_model.encoder.layers.11.layer_norm2.bias": "model-00001-of-00003.safetensors",
434
+ "vision_model.vision_model.encoder.layers.11.layer_norm2.weight": "model-00001-of-00003.safetensors",
435
+ "vision_model.vision_model.encoder.layers.11.mlp.fc1.bias": "model-00001-of-00003.safetensors",
436
+ "vision_model.vision_model.encoder.layers.11.mlp.fc1.weight": "model-00001-of-00003.safetensors",
437
+ "vision_model.vision_model.encoder.layers.11.mlp.fc2.bias": "model-00001-of-00003.safetensors",
438
+ "vision_model.vision_model.encoder.layers.11.mlp.fc2.weight": "model-00001-of-00003.safetensors",
439
+ "vision_model.vision_model.encoder.layers.11.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
440
+ "vision_model.vision_model.encoder.layers.11.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
441
+ "vision_model.vision_model.encoder.layers.11.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
442
+ "vision_model.vision_model.encoder.layers.11.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
443
+ "vision_model.vision_model.encoder.layers.11.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
444
+ "vision_model.vision_model.encoder.layers.11.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
445
+ "vision_model.vision_model.encoder.layers.11.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
446
+ "vision_model.vision_model.encoder.layers.11.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
447
+ "vision_model.vision_model.encoder.layers.12.layer_norm1.bias": "model-00001-of-00003.safetensors",
448
+ "vision_model.vision_model.encoder.layers.12.layer_norm1.weight": "model-00001-of-00003.safetensors",
449
+ "vision_model.vision_model.encoder.layers.12.layer_norm2.bias": "model-00001-of-00003.safetensors",
450
+ "vision_model.vision_model.encoder.layers.12.layer_norm2.weight": "model-00001-of-00003.safetensors",
451
+ "vision_model.vision_model.encoder.layers.12.mlp.fc1.bias": "model-00001-of-00003.safetensors",
452
+ "vision_model.vision_model.encoder.layers.12.mlp.fc1.weight": "model-00001-of-00003.safetensors",
453
+ "vision_model.vision_model.encoder.layers.12.mlp.fc2.bias": "model-00001-of-00003.safetensors",
454
+ "vision_model.vision_model.encoder.layers.12.mlp.fc2.weight": "model-00001-of-00003.safetensors",
455
+ "vision_model.vision_model.encoder.layers.12.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
456
+ "vision_model.vision_model.encoder.layers.12.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
457
+ "vision_model.vision_model.encoder.layers.12.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
458
+ "vision_model.vision_model.encoder.layers.12.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
459
+ "vision_model.vision_model.encoder.layers.12.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
460
+ "vision_model.vision_model.encoder.layers.12.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
461
+ "vision_model.vision_model.encoder.layers.12.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
462
+ "vision_model.vision_model.encoder.layers.12.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
463
+ "vision_model.vision_model.encoder.layers.13.layer_norm1.bias": "model-00001-of-00003.safetensors",
464
+ "vision_model.vision_model.encoder.layers.13.layer_norm1.weight": "model-00001-of-00003.safetensors",
465
+ "vision_model.vision_model.encoder.layers.13.layer_norm2.bias": "model-00001-of-00003.safetensors",
466
+ "vision_model.vision_model.encoder.layers.13.layer_norm2.weight": "model-00001-of-00003.safetensors",
467
+ "vision_model.vision_model.encoder.layers.13.mlp.fc1.bias": "model-00001-of-00003.safetensors",
468
+ "vision_model.vision_model.encoder.layers.13.mlp.fc1.weight": "model-00001-of-00003.safetensors",
469
+ "vision_model.vision_model.encoder.layers.13.mlp.fc2.bias": "model-00001-of-00003.safetensors",
470
+ "vision_model.vision_model.encoder.layers.13.mlp.fc2.weight": "model-00001-of-00003.safetensors",
471
+ "vision_model.vision_model.encoder.layers.13.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
472
+ "vision_model.vision_model.encoder.layers.13.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
473
+ "vision_model.vision_model.encoder.layers.13.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
474
+ "vision_model.vision_model.encoder.layers.13.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
475
+ "vision_model.vision_model.encoder.layers.13.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
476
+ "vision_model.vision_model.encoder.layers.13.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
477
+ "vision_model.vision_model.encoder.layers.13.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
478
+ "vision_model.vision_model.encoder.layers.13.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
479
+ "vision_model.vision_model.encoder.layers.14.layer_norm1.bias": "model-00001-of-00003.safetensors",
480
+ "vision_model.vision_model.encoder.layers.14.layer_norm1.weight": "model-00001-of-00003.safetensors",
481
+ "vision_model.vision_model.encoder.layers.14.layer_norm2.bias": "model-00001-of-00003.safetensors",
482
+ "vision_model.vision_model.encoder.layers.14.layer_norm2.weight": "model-00001-of-00003.safetensors",
483
+ "vision_model.vision_model.encoder.layers.14.mlp.fc1.bias": "model-00001-of-00003.safetensors",
484
+ "vision_model.vision_model.encoder.layers.14.mlp.fc1.weight": "model-00001-of-00003.safetensors",
485
+ "vision_model.vision_model.encoder.layers.14.mlp.fc2.bias": "model-00001-of-00003.safetensors",
486
+ "vision_model.vision_model.encoder.layers.14.mlp.fc2.weight": "model-00001-of-00003.safetensors",
487
+ "vision_model.vision_model.encoder.layers.14.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
488
+ "vision_model.vision_model.encoder.layers.14.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
489
+ "vision_model.vision_model.encoder.layers.14.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
490
+ "vision_model.vision_model.encoder.layers.14.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
491
+ "vision_model.vision_model.encoder.layers.14.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
492
+ "vision_model.vision_model.encoder.layers.14.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
493
+ "vision_model.vision_model.encoder.layers.14.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
494
+ "vision_model.vision_model.encoder.layers.14.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
495
+ "vision_model.vision_model.encoder.layers.15.layer_norm1.bias": "model-00001-of-00003.safetensors",
496
+ "vision_model.vision_model.encoder.layers.15.layer_norm1.weight": "model-00001-of-00003.safetensors",
497
+ "vision_model.vision_model.encoder.layers.15.layer_norm2.bias": "model-00001-of-00003.safetensors",
498
+ "vision_model.vision_model.encoder.layers.15.layer_norm2.weight": "model-00001-of-00003.safetensors",
499
+ "vision_model.vision_model.encoder.layers.15.mlp.fc1.bias": "model-00001-of-00003.safetensors",
500
+ "vision_model.vision_model.encoder.layers.15.mlp.fc1.weight": "model-00001-of-00003.safetensors",
501
+ "vision_model.vision_model.encoder.layers.15.mlp.fc2.bias": "model-00001-of-00003.safetensors",
502
+ "vision_model.vision_model.encoder.layers.15.mlp.fc2.weight": "model-00001-of-00003.safetensors",
503
+ "vision_model.vision_model.encoder.layers.15.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
504
+ "vision_model.vision_model.encoder.layers.15.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
505
+ "vision_model.vision_model.encoder.layers.15.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
506
+ "vision_model.vision_model.encoder.layers.15.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
507
+ "vision_model.vision_model.encoder.layers.15.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
508
+ "vision_model.vision_model.encoder.layers.15.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
509
+ "vision_model.vision_model.encoder.layers.15.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
510
+ "vision_model.vision_model.encoder.layers.15.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
511
+ "vision_model.vision_model.encoder.layers.16.layer_norm1.bias": "model-00001-of-00003.safetensors",
512
+ "vision_model.vision_model.encoder.layers.16.layer_norm1.weight": "model-00001-of-00003.safetensors",
513
+ "vision_model.vision_model.encoder.layers.16.layer_norm2.bias": "model-00001-of-00003.safetensors",
514
+ "vision_model.vision_model.encoder.layers.16.layer_norm2.weight": "model-00001-of-00003.safetensors",
515
+ "vision_model.vision_model.encoder.layers.16.mlp.fc1.bias": "model-00001-of-00003.safetensors",
516
+ "vision_model.vision_model.encoder.layers.16.mlp.fc1.weight": "model-00001-of-00003.safetensors",
517
+ "vision_model.vision_model.encoder.layers.16.mlp.fc2.bias": "model-00001-of-00003.safetensors",
518
+ "vision_model.vision_model.encoder.layers.16.mlp.fc2.weight": "model-00001-of-00003.safetensors",
519
+ "vision_model.vision_model.encoder.layers.16.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
520
+ "vision_model.vision_model.encoder.layers.16.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
521
+ "vision_model.vision_model.encoder.layers.16.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
522
+ "vision_model.vision_model.encoder.layers.16.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
523
+ "vision_model.vision_model.encoder.layers.16.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
524
+ "vision_model.vision_model.encoder.layers.16.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
525
+ "vision_model.vision_model.encoder.layers.16.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
526
+ "vision_model.vision_model.encoder.layers.16.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
527
+ "vision_model.vision_model.encoder.layers.17.layer_norm1.bias": "model-00001-of-00003.safetensors",
528
+ "vision_model.vision_model.encoder.layers.17.layer_norm1.weight": "model-00001-of-00003.safetensors",
529
+ "vision_model.vision_model.encoder.layers.17.layer_norm2.bias": "model-00001-of-00003.safetensors",
530
+ "vision_model.vision_model.encoder.layers.17.layer_norm2.weight": "model-00001-of-00003.safetensors",
531
+ "vision_model.vision_model.encoder.layers.17.mlp.fc1.bias": "model-00001-of-00003.safetensors",
532
+ "vision_model.vision_model.encoder.layers.17.mlp.fc1.weight": "model-00001-of-00003.safetensors",
533
+ "vision_model.vision_model.encoder.layers.17.mlp.fc2.bias": "model-00001-of-00003.safetensors",
534
+ "vision_model.vision_model.encoder.layers.17.mlp.fc2.weight": "model-00001-of-00003.safetensors",
535
+ "vision_model.vision_model.encoder.layers.17.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
536
+ "vision_model.vision_model.encoder.layers.17.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
537
+ "vision_model.vision_model.encoder.layers.17.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
538
+ "vision_model.vision_model.encoder.layers.17.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
539
+ "vision_model.vision_model.encoder.layers.17.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
540
+ "vision_model.vision_model.encoder.layers.17.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
541
+ "vision_model.vision_model.encoder.layers.17.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
542
+ "vision_model.vision_model.encoder.layers.17.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
543
+ "vision_model.vision_model.encoder.layers.18.layer_norm1.bias": "model-00001-of-00003.safetensors",
544
+ "vision_model.vision_model.encoder.layers.18.layer_norm1.weight": "model-00001-of-00003.safetensors",
545
+ "vision_model.vision_model.encoder.layers.18.layer_norm2.bias": "model-00001-of-00003.safetensors",
546
+ "vision_model.vision_model.encoder.layers.18.layer_norm2.weight": "model-00001-of-00003.safetensors",
547
+ "vision_model.vision_model.encoder.layers.18.mlp.fc1.bias": "model-00001-of-00003.safetensors",
548
+ "vision_model.vision_model.encoder.layers.18.mlp.fc1.weight": "model-00001-of-00003.safetensors",
549
+ "vision_model.vision_model.encoder.layers.18.mlp.fc2.bias": "model-00001-of-00003.safetensors",
550
+ "vision_model.vision_model.encoder.layers.18.mlp.fc2.weight": "model-00001-of-00003.safetensors",
551
+ "vision_model.vision_model.encoder.layers.18.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
552
+ "vision_model.vision_model.encoder.layers.18.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
553
+ "vision_model.vision_model.encoder.layers.18.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
554
+ "vision_model.vision_model.encoder.layers.18.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
555
+ "vision_model.vision_model.encoder.layers.18.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
556
+ "vision_model.vision_model.encoder.layers.18.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
557
+ "vision_model.vision_model.encoder.layers.18.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
558
+ "vision_model.vision_model.encoder.layers.18.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
559
+ "vision_model.vision_model.encoder.layers.19.layer_norm1.bias": "model-00001-of-00003.safetensors",
560
+ "vision_model.vision_model.encoder.layers.19.layer_norm1.weight": "model-00001-of-00003.safetensors",
561
+ "vision_model.vision_model.encoder.layers.19.layer_norm2.bias": "model-00001-of-00003.safetensors",
562
+ "vision_model.vision_model.encoder.layers.19.layer_norm2.weight": "model-00001-of-00003.safetensors",
563
+ "vision_model.vision_model.encoder.layers.19.mlp.fc1.bias": "model-00001-of-00003.safetensors",
564
+ "vision_model.vision_model.encoder.layers.19.mlp.fc1.weight": "model-00001-of-00003.safetensors",
565
+ "vision_model.vision_model.encoder.layers.19.mlp.fc2.bias": "model-00001-of-00003.safetensors",
566
+ "vision_model.vision_model.encoder.layers.19.mlp.fc2.weight": "model-00001-of-00003.safetensors",
567
+ "vision_model.vision_model.encoder.layers.19.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
568
+ "vision_model.vision_model.encoder.layers.19.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
569
+ "vision_model.vision_model.encoder.layers.19.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
570
+ "vision_model.vision_model.encoder.layers.19.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
571
+ "vision_model.vision_model.encoder.layers.19.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
572
+ "vision_model.vision_model.encoder.layers.19.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
573
+ "vision_model.vision_model.encoder.layers.19.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
574
+ "vision_model.vision_model.encoder.layers.19.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
575
+ "vision_model.vision_model.encoder.layers.2.layer_norm1.bias": "model-00001-of-00003.safetensors",
576
+ "vision_model.vision_model.encoder.layers.2.layer_norm1.weight": "model-00001-of-00003.safetensors",
577
+ "vision_model.vision_model.encoder.layers.2.layer_norm2.bias": "model-00001-of-00003.safetensors",
578
+ "vision_model.vision_model.encoder.layers.2.layer_norm2.weight": "model-00001-of-00003.safetensors",
579
+ "vision_model.vision_model.encoder.layers.2.mlp.fc1.bias": "model-00001-of-00003.safetensors",
580
+ "vision_model.vision_model.encoder.layers.2.mlp.fc1.weight": "model-00001-of-00003.safetensors",
581
+ "vision_model.vision_model.encoder.layers.2.mlp.fc2.bias": "model-00001-of-00003.safetensors",
582
+ "vision_model.vision_model.encoder.layers.2.mlp.fc2.weight": "model-00001-of-00003.safetensors",
583
+ "vision_model.vision_model.encoder.layers.2.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
584
+ "vision_model.vision_model.encoder.layers.2.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
585
+ "vision_model.vision_model.encoder.layers.2.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
586
+ "vision_model.vision_model.encoder.layers.2.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
587
+ "vision_model.vision_model.encoder.layers.2.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
588
+ "vision_model.vision_model.encoder.layers.2.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
589
+ "vision_model.vision_model.encoder.layers.2.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
590
+ "vision_model.vision_model.encoder.layers.2.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
591
+ "vision_model.vision_model.encoder.layers.20.layer_norm1.bias": "model-00001-of-00003.safetensors",
592
+ "vision_model.vision_model.encoder.layers.20.layer_norm1.weight": "model-00001-of-00003.safetensors",
593
+ "vision_model.vision_model.encoder.layers.20.layer_norm2.bias": "model-00001-of-00003.safetensors",
594
+ "vision_model.vision_model.encoder.layers.20.layer_norm2.weight": "model-00001-of-00003.safetensors",
595
+ "vision_model.vision_model.encoder.layers.20.mlp.fc1.bias": "model-00001-of-00003.safetensors",
596
+ "vision_model.vision_model.encoder.layers.20.mlp.fc1.weight": "model-00001-of-00003.safetensors",
597
+ "vision_model.vision_model.encoder.layers.20.mlp.fc2.bias": "model-00001-of-00003.safetensors",
598
+ "vision_model.vision_model.encoder.layers.20.mlp.fc2.weight": "model-00001-of-00003.safetensors",
599
+ "vision_model.vision_model.encoder.layers.20.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
600
+ "vision_model.vision_model.encoder.layers.20.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
601
+ "vision_model.vision_model.encoder.layers.20.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
602
+ "vision_model.vision_model.encoder.layers.20.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
603
+ "vision_model.vision_model.encoder.layers.20.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
604
+ "vision_model.vision_model.encoder.layers.20.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
605
+ "vision_model.vision_model.encoder.layers.20.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
606
+ "vision_model.vision_model.encoder.layers.20.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
607
+ "vision_model.vision_model.encoder.layers.21.layer_norm1.bias": "model-00001-of-00003.safetensors",
608
+ "vision_model.vision_model.encoder.layers.21.layer_norm1.weight": "model-00001-of-00003.safetensors",
609
+ "vision_model.vision_model.encoder.layers.21.layer_norm2.bias": "model-00001-of-00003.safetensors",
610
+ "vision_model.vision_model.encoder.layers.21.layer_norm2.weight": "model-00001-of-00003.safetensors",
611
+ "vision_model.vision_model.encoder.layers.21.mlp.fc1.bias": "model-00001-of-00003.safetensors",
612
+ "vision_model.vision_model.encoder.layers.21.mlp.fc1.weight": "model-00001-of-00003.safetensors",
613
+ "vision_model.vision_model.encoder.layers.21.mlp.fc2.bias": "model-00001-of-00003.safetensors",
614
+ "vision_model.vision_model.encoder.layers.21.mlp.fc2.weight": "model-00001-of-00003.safetensors",
615
+ "vision_model.vision_model.encoder.layers.21.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
616
+ "vision_model.vision_model.encoder.layers.21.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
617
+ "vision_model.vision_model.encoder.layers.21.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
618
+ "vision_model.vision_model.encoder.layers.21.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
619
+ "vision_model.vision_model.encoder.layers.21.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
620
+ "vision_model.vision_model.encoder.layers.21.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
621
+ "vision_model.vision_model.encoder.layers.21.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
622
+ "vision_model.vision_model.encoder.layers.21.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
623
+ "vision_model.vision_model.encoder.layers.22.layer_norm1.bias": "model-00001-of-00003.safetensors",
624
+ "vision_model.vision_model.encoder.layers.22.layer_norm1.weight": "model-00001-of-00003.safetensors",
625
+ "vision_model.vision_model.encoder.layers.22.layer_norm2.bias": "model-00001-of-00003.safetensors",
626
+ "vision_model.vision_model.encoder.layers.22.layer_norm2.weight": "model-00001-of-00003.safetensors",
627
+ "vision_model.vision_model.encoder.layers.22.mlp.fc1.bias": "model-00001-of-00003.safetensors",
628
+ "vision_model.vision_model.encoder.layers.22.mlp.fc1.weight": "model-00001-of-00003.safetensors",
629
+ "vision_model.vision_model.encoder.layers.22.mlp.fc2.bias": "model-00001-of-00003.safetensors",
630
+ "vision_model.vision_model.encoder.layers.22.mlp.fc2.weight": "model-00001-of-00003.safetensors",
631
+ "vision_model.vision_model.encoder.layers.22.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
632
+ "vision_model.vision_model.encoder.layers.22.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
633
+ "vision_model.vision_model.encoder.layers.22.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
634
+ "vision_model.vision_model.encoder.layers.22.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
635
+ "vision_model.vision_model.encoder.layers.22.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
636
+ "vision_model.vision_model.encoder.layers.22.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
637
+ "vision_model.vision_model.encoder.layers.22.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
638
+ "vision_model.vision_model.encoder.layers.22.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
639
+ "vision_model.vision_model.encoder.layers.23.layer_norm1.bias": "model-00001-of-00003.safetensors",
640
+ "vision_model.vision_model.encoder.layers.23.layer_norm1.weight": "model-00001-of-00003.safetensors",
641
+ "vision_model.vision_model.encoder.layers.23.layer_norm2.bias": "model-00001-of-00003.safetensors",
642
+ "vision_model.vision_model.encoder.layers.23.layer_norm2.weight": "model-00001-of-00003.safetensors",
643
+ "vision_model.vision_model.encoder.layers.23.mlp.fc1.bias": "model-00001-of-00003.safetensors",
644
+ "vision_model.vision_model.encoder.layers.23.mlp.fc1.weight": "model-00001-of-00003.safetensors",
645
+ "vision_model.vision_model.encoder.layers.23.mlp.fc2.bias": "model-00001-of-00003.safetensors",
646
+ "vision_model.vision_model.encoder.layers.23.mlp.fc2.weight": "model-00001-of-00003.safetensors",
647
+ "vision_model.vision_model.encoder.layers.23.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
648
+ "vision_model.vision_model.encoder.layers.23.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
649
+ "vision_model.vision_model.encoder.layers.23.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
650
+ "vision_model.vision_model.encoder.layers.23.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
651
+ "vision_model.vision_model.encoder.layers.23.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
652
+ "vision_model.vision_model.encoder.layers.23.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
653
+ "vision_model.vision_model.encoder.layers.23.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
654
+ "vision_model.vision_model.encoder.layers.23.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
655
+ "vision_model.vision_model.encoder.layers.24.layer_norm1.bias": "model-00001-of-00003.safetensors",
656
+ "vision_model.vision_model.encoder.layers.24.layer_norm1.weight": "model-00001-of-00003.safetensors",
657
+ "vision_model.vision_model.encoder.layers.24.layer_norm2.bias": "model-00001-of-00003.safetensors",
658
+ "vision_model.vision_model.encoder.layers.24.layer_norm2.weight": "model-00001-of-00003.safetensors",
659
+ "vision_model.vision_model.encoder.layers.24.mlp.fc1.bias": "model-00001-of-00003.safetensors",
660
+ "vision_model.vision_model.encoder.layers.24.mlp.fc1.weight": "model-00001-of-00003.safetensors",
661
+ "vision_model.vision_model.encoder.layers.24.mlp.fc2.bias": "model-00001-of-00003.safetensors",
662
+ "vision_model.vision_model.encoder.layers.24.mlp.fc2.weight": "model-00001-of-00003.safetensors",
663
+ "vision_model.vision_model.encoder.layers.24.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
664
+ "vision_model.vision_model.encoder.layers.24.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
665
+ "vision_model.vision_model.encoder.layers.24.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
666
+ "vision_model.vision_model.encoder.layers.24.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
667
+ "vision_model.vision_model.encoder.layers.24.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
668
+ "vision_model.vision_model.encoder.layers.24.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
669
+ "vision_model.vision_model.encoder.layers.24.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
670
+ "vision_model.vision_model.encoder.layers.24.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
671
+ "vision_model.vision_model.encoder.layers.25.layer_norm1.bias": "model-00001-of-00003.safetensors",
672
+ "vision_model.vision_model.encoder.layers.25.layer_norm1.weight": "model-00001-of-00003.safetensors",
673
+ "vision_model.vision_model.encoder.layers.25.layer_norm2.bias": "model-00001-of-00003.safetensors",
674
+ "vision_model.vision_model.encoder.layers.25.layer_norm2.weight": "model-00001-of-00003.safetensors",
675
+ "vision_model.vision_model.encoder.layers.25.mlp.fc1.bias": "model-00001-of-00003.safetensors",
676
+ "vision_model.vision_model.encoder.layers.25.mlp.fc1.weight": "model-00001-of-00003.safetensors",
677
+ "vision_model.vision_model.encoder.layers.25.mlp.fc2.bias": "model-00001-of-00003.safetensors",
678
+ "vision_model.vision_model.encoder.layers.25.mlp.fc2.weight": "model-00001-of-00003.safetensors",
679
+ "vision_model.vision_model.encoder.layers.25.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
680
+ "vision_model.vision_model.encoder.layers.25.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
681
+ "vision_model.vision_model.encoder.layers.25.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
682
+ "vision_model.vision_model.encoder.layers.25.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
683
+ "vision_model.vision_model.encoder.layers.25.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
684
+ "vision_model.vision_model.encoder.layers.25.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
685
+ "vision_model.vision_model.encoder.layers.25.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
686
+ "vision_model.vision_model.encoder.layers.25.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
687
+ "vision_model.vision_model.encoder.layers.26.layer_norm1.bias": "model-00001-of-00003.safetensors",
688
+ "vision_model.vision_model.encoder.layers.26.layer_norm1.weight": "model-00001-of-00003.safetensors",
689
+ "vision_model.vision_model.encoder.layers.26.layer_norm2.bias": "model-00001-of-00003.safetensors",
690
+ "vision_model.vision_model.encoder.layers.26.layer_norm2.weight": "model-00001-of-00003.safetensors",
691
+ "vision_model.vision_model.encoder.layers.26.mlp.fc1.bias": "model-00001-of-00003.safetensors",
692
+ "vision_model.vision_model.encoder.layers.26.mlp.fc1.weight": "model-00001-of-00003.safetensors",
693
+ "vision_model.vision_model.encoder.layers.26.mlp.fc2.bias": "model-00001-of-00003.safetensors",
694
+ "vision_model.vision_model.encoder.layers.26.mlp.fc2.weight": "model-00001-of-00003.safetensors",
695
+ "vision_model.vision_model.encoder.layers.26.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
696
+ "vision_model.vision_model.encoder.layers.26.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
697
+ "vision_model.vision_model.encoder.layers.26.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
698
+ "vision_model.vision_model.encoder.layers.26.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
699
+ "vision_model.vision_model.encoder.layers.26.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
700
+ "vision_model.vision_model.encoder.layers.26.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
701
+ "vision_model.vision_model.encoder.layers.26.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
702
+ "vision_model.vision_model.encoder.layers.26.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
703
+ "vision_model.vision_model.encoder.layers.3.layer_norm1.bias": "model-00001-of-00003.safetensors",
704
+ "vision_model.vision_model.encoder.layers.3.layer_norm1.weight": "model-00001-of-00003.safetensors",
705
+ "vision_model.vision_model.encoder.layers.3.layer_norm2.bias": "model-00001-of-00003.safetensors",
706
+ "vision_model.vision_model.encoder.layers.3.layer_norm2.weight": "model-00001-of-00003.safetensors",
707
+ "vision_model.vision_model.encoder.layers.3.mlp.fc1.bias": "model-00001-of-00003.safetensors",
708
+ "vision_model.vision_model.encoder.layers.3.mlp.fc1.weight": "model-00001-of-00003.safetensors",
709
+ "vision_model.vision_model.encoder.layers.3.mlp.fc2.bias": "model-00001-of-00003.safetensors",
710
+ "vision_model.vision_model.encoder.layers.3.mlp.fc2.weight": "model-00001-of-00003.safetensors",
711
+ "vision_model.vision_model.encoder.layers.3.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
712
+ "vision_model.vision_model.encoder.layers.3.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
713
+ "vision_model.vision_model.encoder.layers.3.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
714
+ "vision_model.vision_model.encoder.layers.3.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
715
+ "vision_model.vision_model.encoder.layers.3.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
716
+ "vision_model.vision_model.encoder.layers.3.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
717
+ "vision_model.vision_model.encoder.layers.3.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
718
+ "vision_model.vision_model.encoder.layers.3.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
719
+ "vision_model.vision_model.encoder.layers.4.layer_norm1.bias": "model-00001-of-00003.safetensors",
720
+ "vision_model.vision_model.encoder.layers.4.layer_norm1.weight": "model-00001-of-00003.safetensors",
721
+ "vision_model.vision_model.encoder.layers.4.layer_norm2.bias": "model-00001-of-00003.safetensors",
722
+ "vision_model.vision_model.encoder.layers.4.layer_norm2.weight": "model-00001-of-00003.safetensors",
723
+ "vision_model.vision_model.encoder.layers.4.mlp.fc1.bias": "model-00001-of-00003.safetensors",
724
+ "vision_model.vision_model.encoder.layers.4.mlp.fc1.weight": "model-00001-of-00003.safetensors",
725
+ "vision_model.vision_model.encoder.layers.4.mlp.fc2.bias": "model-00001-of-00003.safetensors",
726
+ "vision_model.vision_model.encoder.layers.4.mlp.fc2.weight": "model-00001-of-00003.safetensors",
727
+ "vision_model.vision_model.encoder.layers.4.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
728
+ "vision_model.vision_model.encoder.layers.4.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
729
+ "vision_model.vision_model.encoder.layers.4.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
730
+ "vision_model.vision_model.encoder.layers.4.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
731
+ "vision_model.vision_model.encoder.layers.4.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
732
+ "vision_model.vision_model.encoder.layers.4.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
733
+ "vision_model.vision_model.encoder.layers.4.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
734
+ "vision_model.vision_model.encoder.layers.4.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
735
+ "vision_model.vision_model.encoder.layers.5.layer_norm1.bias": "model-00001-of-00003.safetensors",
736
+ "vision_model.vision_model.encoder.layers.5.layer_norm1.weight": "model-00001-of-00003.safetensors",
737
+ "vision_model.vision_model.encoder.layers.5.layer_norm2.bias": "model-00001-of-00003.safetensors",
738
+ "vision_model.vision_model.encoder.layers.5.layer_norm2.weight": "model-00001-of-00003.safetensors",
739
+ "vision_model.vision_model.encoder.layers.5.mlp.fc1.bias": "model-00001-of-00003.safetensors",
740
+ "vision_model.vision_model.encoder.layers.5.mlp.fc1.weight": "model-00001-of-00003.safetensors",
741
+ "vision_model.vision_model.encoder.layers.5.mlp.fc2.bias": "model-00001-of-00003.safetensors",
742
+ "vision_model.vision_model.encoder.layers.5.mlp.fc2.weight": "model-00001-of-00003.safetensors",
743
+ "vision_model.vision_model.encoder.layers.5.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
744
+ "vision_model.vision_model.encoder.layers.5.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
745
+ "vision_model.vision_model.encoder.layers.5.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
746
+ "vision_model.vision_model.encoder.layers.5.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
747
+ "vision_model.vision_model.encoder.layers.5.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
748
+ "vision_model.vision_model.encoder.layers.5.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
749
+ "vision_model.vision_model.encoder.layers.5.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
750
+ "vision_model.vision_model.encoder.layers.5.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
751
+ "vision_model.vision_model.encoder.layers.6.layer_norm1.bias": "model-00001-of-00003.safetensors",
752
+ "vision_model.vision_model.encoder.layers.6.layer_norm1.weight": "model-00001-of-00003.safetensors",
753
+ "vision_model.vision_model.encoder.layers.6.layer_norm2.bias": "model-00001-of-00003.safetensors",
754
+ "vision_model.vision_model.encoder.layers.6.layer_norm2.weight": "model-00001-of-00003.safetensors",
755
+ "vision_model.vision_model.encoder.layers.6.mlp.fc1.bias": "model-00001-of-00003.safetensors",
756
+ "vision_model.vision_model.encoder.layers.6.mlp.fc1.weight": "model-00001-of-00003.safetensors",
757
+ "vision_model.vision_model.encoder.layers.6.mlp.fc2.bias": "model-00001-of-00003.safetensors",
758
+ "vision_model.vision_model.encoder.layers.6.mlp.fc2.weight": "model-00001-of-00003.safetensors",
759
+ "vision_model.vision_model.encoder.layers.6.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
760
+ "vision_model.vision_model.encoder.layers.6.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
761
+ "vision_model.vision_model.encoder.layers.6.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
762
+ "vision_model.vision_model.encoder.layers.6.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
763
+ "vision_model.vision_model.encoder.layers.6.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
764
+ "vision_model.vision_model.encoder.layers.6.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
765
+ "vision_model.vision_model.encoder.layers.6.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
766
+ "vision_model.vision_model.encoder.layers.6.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
767
+ "vision_model.vision_model.encoder.layers.7.layer_norm1.bias": "model-00001-of-00003.safetensors",
768
+ "vision_model.vision_model.encoder.layers.7.layer_norm1.weight": "model-00001-of-00003.safetensors",
769
+ "vision_model.vision_model.encoder.layers.7.layer_norm2.bias": "model-00001-of-00003.safetensors",
770
+ "vision_model.vision_model.encoder.layers.7.layer_norm2.weight": "model-00001-of-00003.safetensors",
771
+ "vision_model.vision_model.encoder.layers.7.mlp.fc1.bias": "model-00001-of-00003.safetensors",
772
+ "vision_model.vision_model.encoder.layers.7.mlp.fc1.weight": "model-00001-of-00003.safetensors",
773
+ "vision_model.vision_model.encoder.layers.7.mlp.fc2.bias": "model-00001-of-00003.safetensors",
774
+ "vision_model.vision_model.encoder.layers.7.mlp.fc2.weight": "model-00001-of-00003.safetensors",
775
+ "vision_model.vision_model.encoder.layers.7.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
776
+ "vision_model.vision_model.encoder.layers.7.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
777
+ "vision_model.vision_model.encoder.layers.7.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
778
+ "vision_model.vision_model.encoder.layers.7.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
779
+ "vision_model.vision_model.encoder.layers.7.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
780
+ "vision_model.vision_model.encoder.layers.7.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
781
+ "vision_model.vision_model.encoder.layers.7.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
782
+ "vision_model.vision_model.encoder.layers.7.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
783
+ "vision_model.vision_model.encoder.layers.8.layer_norm1.bias": "model-00001-of-00003.safetensors",
784
+ "vision_model.vision_model.encoder.layers.8.layer_norm1.weight": "model-00001-of-00003.safetensors",
785
+ "vision_model.vision_model.encoder.layers.8.layer_norm2.bias": "model-00001-of-00003.safetensors",
786
+ "vision_model.vision_model.encoder.layers.8.layer_norm2.weight": "model-00001-of-00003.safetensors",
787
+ "vision_model.vision_model.encoder.layers.8.mlp.fc1.bias": "model-00001-of-00003.safetensors",
788
+ "vision_model.vision_model.encoder.layers.8.mlp.fc1.weight": "model-00001-of-00003.safetensors",
789
+ "vision_model.vision_model.encoder.layers.8.mlp.fc2.bias": "model-00001-of-00003.safetensors",
790
+ "vision_model.vision_model.encoder.layers.8.mlp.fc2.weight": "model-00001-of-00003.safetensors",
791
+ "vision_model.vision_model.encoder.layers.8.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
792
+ "vision_model.vision_model.encoder.layers.8.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
793
+ "vision_model.vision_model.encoder.layers.8.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
794
+ "vision_model.vision_model.encoder.layers.8.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
795
+ "vision_model.vision_model.encoder.layers.8.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
796
+ "vision_model.vision_model.encoder.layers.8.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
797
+ "vision_model.vision_model.encoder.layers.8.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
798
+ "vision_model.vision_model.encoder.layers.8.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
799
+ "vision_model.vision_model.encoder.layers.9.layer_norm1.bias": "model-00001-of-00003.safetensors",
800
+ "vision_model.vision_model.encoder.layers.9.layer_norm1.weight": "model-00001-of-00003.safetensors",
801
+ "vision_model.vision_model.encoder.layers.9.layer_norm2.bias": "model-00001-of-00003.safetensors",
802
+ "vision_model.vision_model.encoder.layers.9.layer_norm2.weight": "model-00001-of-00003.safetensors",
803
+ "vision_model.vision_model.encoder.layers.9.mlp.fc1.bias": "model-00001-of-00003.safetensors",
804
+ "vision_model.vision_model.encoder.layers.9.mlp.fc1.weight": "model-00001-of-00003.safetensors",
805
+ "vision_model.vision_model.encoder.layers.9.mlp.fc2.bias": "model-00001-of-00003.safetensors",
806
+ "vision_model.vision_model.encoder.layers.9.mlp.fc2.weight": "model-00001-of-00003.safetensors",
807
+ "vision_model.vision_model.encoder.layers.9.self_attn.k_proj.bias": "model-00001-of-00003.safetensors",
808
+ "vision_model.vision_model.encoder.layers.9.self_attn.k_proj.weight": "model-00001-of-00003.safetensors",
809
+ "vision_model.vision_model.encoder.layers.9.self_attn.out_proj.bias": "model-00001-of-00003.safetensors",
810
+ "vision_model.vision_model.encoder.layers.9.self_attn.out_proj.weight": "model-00001-of-00003.safetensors",
811
+ "vision_model.vision_model.encoder.layers.9.self_attn.q_proj.bias": "model-00001-of-00003.safetensors",
812
+ "vision_model.vision_model.encoder.layers.9.self_attn.q_proj.weight": "model-00001-of-00003.safetensors",
813
+ "vision_model.vision_model.encoder.layers.9.self_attn.v_proj.bias": "model-00001-of-00003.safetensors",
814
+ "vision_model.vision_model.encoder.layers.9.self_attn.v_proj.weight": "model-00001-of-00003.safetensors",
815
+ "vision_model.vision_model.head.attention.in_proj_bias": "model-00001-of-00003.safetensors",
816
+ "vision_model.vision_model.head.attention.in_proj_weight": "model-00001-of-00003.safetensors",
817
+ "vision_model.vision_model.head.attention.out_proj.bias": "model-00001-of-00003.safetensors",
818
+ "vision_model.vision_model.head.attention.out_proj.weight": "model-00001-of-00003.safetensors",
819
+ "vision_model.vision_model.head.layernorm.bias": "model-00001-of-00003.safetensors",
820
+ "vision_model.vision_model.head.layernorm.weight": "model-00001-of-00003.safetensors",
821
+ "vision_model.vision_model.head.mlp.fc1.bias": "model-00001-of-00003.safetensors",
822
+ "vision_model.vision_model.head.mlp.fc1.weight": "model-00001-of-00003.safetensors",
823
+ "vision_model.vision_model.head.mlp.fc2.bias": "model-00001-of-00003.safetensors",
824
+ "vision_model.vision_model.head.mlp.fc2.weight": "model-00001-of-00003.safetensors",
825
+ "vision_model.vision_model.head.probe": "model-00001-of-00003.safetensors",
826
+ "vision_model.vision_model.post_layernorm.bias": "model-00001-of-00003.safetensors",
827
+ "vision_model.vision_model.post_layernorm.weight": "model-00001-of-00003.safetensors"
828
+ }
829
+ }
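
The entries above close out the sharded-checkpoint index: its weight map keys every parameter name to the shard file that stores it, so a loader can open only the shards it needs. As a minimal sketch of consuming such an index — assuming the standard Hugging Face layout with a top-level "weight_map" object and locally downloaded shard files; the find_shard helper below is illustrative, not part of this repository:

    import json
    from safetensors.torch import load_file

    def find_shard(index_path: str, tensor_name: str) -> str:
        # "weight_map" maps parameter names to shard filenames.
        with open(index_path) as f:
            return json.load(f)["weight_map"][tensor_name]

    shard = find_shard("model.safetensors.index.json", "language_model.model.norm.weight")
    weights = load_file(shard)  # reads just that one shard into a name -> tensor dict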
modeling_hyperclovax.py ADDED
@@ -0,0 +1,1344 @@
+ import ast
+ import contextlib
+ import gc
+ import json
+ import os
+ from dataclasses import dataclass
+ from functools import partial
+ from itertools import chain
+ from typing import Any, Dict, List, Optional, Tuple, Union
+
+ import torch
+ import torch.distributed as dist
+ import torch.nn as nn
+ from einops import rearrange
+ from timm.layers import LayerNorm, LayerNorm2d
+ from timm.models.regnet import RegStage
+ from torch.nn import CrossEntropyLoss
+ from transformers import (
+     AutoConfig,
+     AutoModel,
+     AutoModelForCausalLM,
+     AutoTokenizer,
+     PreTrainedModel,
+ )
+ from transformers.generation.utils import GenerationMixin
+ from transformers.modeling_utils import (
+     is_fsdp_enabled,
+     is_local_dist_rank_0,
+     no_init_weights,
+ )
+ from transformers.models.auto import CONFIG_MAPPING
+ from transformers.utils import ModelOutput
+
+ from .configuration_hyperclovax import HCXVisionConfig
+ from .image_processing_hyperclovax import select_best_resolution
+
+ EOT = "<|endofturn|>"
+ IMAGE_LOC = "<|dummy3|>"
+ VIDEO_LOC = "<|_unuse_missing_100270|>"
+
+
+ def get_rank():
+     if dist.is_initialized():
+         return dist.get_rank()
+     return 0
+
+
+ def get_world_size():
+     if torch.distributed.is_initialized():
+         world_size = torch.distributed.get_world_size()
+     else:
+         world_size = 1
+     return world_size
+
+
56
+ def unpad_image(tensor: torch.Tensor, original_size: Tuple[int, int]) -> torch.Tensor:
57
+ """Unpads a PyTorch tensor of a padded and resized image.
58
+
59
+ This function removes padding from a tensor image that was previously padded and resized.
60
+ The padding is removed based on the aspect ratio difference between the original and current image dimensions.
61
+
62
+ Args:
63
+ tensor: The image tensor, assumed to be in CxHxW format.
64
+ original_size: The original size of the image as (width, height).
65
+
66
+ Returns:
67
+ The unpadded image tensor.
68
+
69
+ Examples:
70
+ >>> import torch
71
+ >>> # Example 1: Unpadding with height padding
72
+ >>> padded_tensor = torch.randn(1, 64, 48) # Padded tensor (C=1, H=64, W=48)
73
+ >>> original_size = (32, 32) # Original size (width=32, height=32)
74
+ >>> unpadded_tensor = unpad_image(padded_tensor, original_size)
75
+ >>> unpadded_tensor.shape
76
+ torch.Size([1, 48, 48])
77
+ >>> # Example 2: Unpadding with width padding
78
+ >>> padded_tensor = torch.randn(1, 48, 64) # Padded tensor (C=1, H=48, W=64)
79
+ >>> original_size = (32, 32) # Original size (width=32, height=32)
80
+ >>> unpadded_tensor = unpad_image(padded_tensor, original_size)
81
+ >>> unpadded_tensor.shape
82
+ torch.Size([1, 48, 48])
83
+ """
84
+ original_width, original_height = original_size
85
+ current_height, current_width = tensor.shape[1:]
86
+
87
+ original_aspect_ratio = original_width / original_height
88
+ current_aspect_ratio = current_width / current_height
89
+
90
+ if original_aspect_ratio > current_aspect_ratio:
91
+ scale_factor = current_width / original_width
92
+ new_height = int(original_height * scale_factor)
93
+ padding = (current_height - new_height) // 2
94
+ unpadded_tensor = tensor[:, padding : current_height - padding, :]
95
+ else:
96
+ scale_factor = current_height / original_height
97
+ new_width = int(original_width * scale_factor)
98
+ padding = (current_width - new_width) // 2
99
+ unpadded_tensor = tensor[:, :, padding : current_width - padding]
100
+
101
+ return unpadded_tensor
102
+
103
+
104
+ def get_anyres_image_grid_shape(
105
+ image_size: Tuple[int, int],
106
+ grid_pinpoints: Union[str, List[Tuple[int, int]]],
107
+ patch_size: int,
108
+ ) -> Tuple[int, int]:
109
+ """Calculates the image patch grid shape after any-resolution preprocessing.
110
+
111
+ Selects the optimal resolution from predefined grid pinpoints based on input image
112
+ dimensions using `select_best_resolution`, then computes the grid layout by
113
+ dividing the selected resolution by the patch size using integer division.
114
+
115
+ Args:
116
+ image_size (Tuple[int, int]): Original image dimensions in (width, height) format.
117
+ grid_pinpoints (Union[str, List[Tuple[int, int]]]): Accepts either:
118
+ - List of (height, width) resolution tuples
119
+ - String representation of list (e.g., "[(224, 224), (336, 336)]")
120
+ patch_size (int): Spatial dimension of square patches for grid division.
121
+
122
+ Returns:
123
+ Tuple[int, int]: Grid dimensions as (num_patches_width, num_patches_height).
124
+
125
+ Examples:
126
+ >>> # Basic case with list input
127
+ >>> get_anyres_image_grid_shape((1000, 800), [(224, 224), (448, 448)], 112)
128
+ (4, 4)
129
+
130
+ >>> # Basic case with string input
131
+ >>> get_anyres_image_grid_shape((600, 400), "[(336, 336), (672, 672)]", 112)
132
+ (6, 6)
133
+
134
+ >>> # Case where resolution is not perfectly divisible by patch_size
135
+ >>> # select_best_resolution picks (224, 224). 224 // 100 = 2
136
+ >>> get_anyres_image_grid_shape((500, 500), [(224, 224)], 100)
137
+ (2, 2)
138
+
139
+ >>> # Different patch size
140
+ >>> # select_best_resolution picks (448, 448). 448 // 224 = 2
141
+ >>> get_anyres_image_grid_shape((1200, 900), [(448, 448), (224, 224)], 224)
142
+ (2, 2)
143
+
144
+ Note:
145
+ String-formatted grid_pinpoints are converted via ast.literal_eval. Invalid formats
146
+ may raise syntax exceptions. The actual resolution selection depends on the
147
+ implementation of `select_best_resolution`. The doctests assume
148
+ `select_best_resolution` picks the *first* resolution provided in `grid_pinpoints`.
149
+ """
150
+ possible_resolutions = grid_pinpoints if isinstance(grid_pinpoints, list) else ast.literal_eval(grid_pinpoints)
151
+
152
+ original_width, original_height = image_size
153
+ height, width = select_best_resolution((original_height, original_width), possible_resolutions)
154
+ return width // patch_size, height // patch_size
155
+
156
+
157
+ def reshape_and_unpad_image_features(
158
+ image_feature: torch.Tensor,
159
+ height: int,
160
+ width: int,
161
+ image_size: Tuple[int, int],
162
+ possible_resolutions: List[Tuple[int, int]],
163
+ grid_size: int,
164
+ unpad: bool,
165
+ image_newline: torch.Tensor,
166
+ ) -> torch.Tensor:
167
+ """Reshapes and processes image features with optional unpadding operation.
168
+
169
+ Processes input image features by:
170
+ 1. Separating base features from spatial features
171
+ 2. Reshaping spatial features into a 5D tensor (num_patch_height, num_patch_width, height, width, channels)
172
+ 3. Performing either unpadding operation or simple reshaping based on 'unpad' flag
173
+ 4. Concatenating processed features with base features
174
+
175
+ Args:
176
+ image_feature: Input tensor containing image features with shape
177
+ [1 + num_patches, feature_dim] where the first element is the base feature
178
+ height: Original image height in pixels
179
+ width: Original image width in pixels
180
+ image_size: Target image size as (width, height) tuple
181
+ possible_resolutions: List of possible [height, width] resolutions for multi-scale processing
182
+ grid_size: Grid dimension for patch arrangement
183
+ unpad: Flag to enable unpadding operation
184
+ image_newline: Special token tensor used as separator when unpadding
185
+
186
+ Returns:
187
+ torch.Tensor: Processed image features tensor with shape [1 + num_processed_patches, feature_dim]
188
+
189
+ Raises:
190
+ AssertionError: If base feature dimension doesn't match height*width
191
+ """
192
+ base_image_feature = image_feature[0]
193
+ image_feature = image_feature[1:]
194
+
195
+ assert (
196
+ height * width == base_image_feature.shape[0]
197
+ ), f"height: {height}, width: {width}, base_image_feature.shape[0]: {base_image_feature.shape[0]}"
198
+
199
+ num_patch_width, num_patch_height = get_anyres_image_grid_shape(image_size, possible_resolutions, grid_size)
200
+ image_feature = image_feature.view(num_patch_height, num_patch_width, height, width, -1)
201
+
202
+ if unpad:
203
+ image_feature = image_feature.permute(4, 0, 2, 1, 3).contiguous()
204
+ image_feature = image_feature.flatten(1, 2).flatten(2, 3)
205
+ image_feature = unpad_image(image_feature, image_size)
206
+ image_feature = torch.cat(
207
+ (
208
+ image_feature,
209
+ image_newline[:, None, None].expand(*image_feature.shape[:-1], 1).to(image_feature.device),
210
+ ),
211
+ dim=-1,
212
+ )
213
+ image_feature = image_feature.flatten(1, 2).transpose(0, 1)
214
+ else:
215
+ image_feature = image_feature.permute(0, 2, 1, 3, 4).contiguous()
216
+ image_feature = image_feature.flatten(0, 3)
217
+ image_feature = torch.cat((base_image_feature, image_feature), dim=0)
218
+
219
+ return image_feature
220
+
221
+
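The permute/flatten choreography in `reshape_and_unpad_image_features` is easiest to verify on dummy shapes. A minimal sketch of the unpad branch's reshaping, with illustrative sizes that are not tied to the real config:

import torch

nph, npw, h, w, d = 2, 3, 4, 4, 8             # patch grid, per-grid size, feature dim
feat = torch.randn(nph, npw, h, w, d)         # result of the .view(...) above
x = feat.permute(4, 0, 2, 1, 3).contiguous()  # (d, nph, h, npw, w)
x = x.flatten(1, 2).flatten(2, 3)             # (d, nph*h, npw*w): one stitched 2D map
print(x.shape)                                # torch.Size([8, 8, 12])
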
+ def anyres_postprocessing(
+     image_forward_outs: List[torch.FloatTensor],
+     image_sizes: List[List[int]],
+     possible_resolutions: List[Tuple[int, int]],
+     patch_size: int,
+     grid_size: int,
+     image_newline: torch.FloatTensor,
+     num_queries_vis_abstractor: int = -1,
+     unpad: bool = False,
+ ) -> List[torch.FloatTensor]:
+     """Processes 2D visual features into 1D sequences with post-processing steps.
+
+     Performs AnyRes postprocessing by flattening 2D visual features from grid partitions into 1D sequences, adding
+     newline embeddings at row boundaries for images, and optionally removing padding regions based on original image
+     sizes.
+
+     Args:
+         image_forward_outs (List[torch.FloatTensor]): List of input tensors with shape
+             (number_of_images_in_grid, total_patches, feature_dim) containing visual features, one tensor per image.
+         image_sizes (List[List[int]]): A list where each element is a list `[width, height]` representing the original
+             dimensions of the corresponding image sample. Used for unpadding.
+         possible_resolutions (List[Tuple[int, int]]): A list of supported resolution tuples `(height, width)` used by
+             `reshape_and_unpad_image_features` for spatial reconstruction, especially during unpadding.
+         patch_size (int): The spatial dimension (height and width) of the square patches the image was divided into.
+         grid_size (int): The spatial dimension (height and width) of the square grid onto which patches are mapped.
+             `grid_size` should be divisible by `patch_size`.
+         image_newline (torch.FloatTensor): A learnable tensor representing the newline embedding, typically with shape
+             (1, feature_dim). Added after each row of image patches when not unpadding.
+         num_queries_vis_abstractor (int, optional): If a visual abstractor with a fixed number of output queries is used
+             instead of grid patching, this specifies the number of queries. Must be a perfect square if > 0.
+             Defaults to -1 (indicating standard grid patching is used).
+         unpad (bool, optional): If `True`, removes padding tokens from image features based on `image_sizes` and
+             `possible_resolutions`. Defaults to False.
+
+     Returns:
+         List[torch.FloatTensor]: A list of tensors, where each tensor represents the processed 1D sequence of visual
+         features for a single sample from the input batch. The length of the sequence varies depending on processing
+         (unpadding, newlines).
+
+     Raises:
+         AssertionError: If `num_queries_vis_abstractor` is greater than 0 but not a perfect square.
+     """
+     height = width = grid_size // patch_size
+
+     if num_queries_vis_abstractor > 0:
+         assert (num_queries_vis_abstractor**0.5).is_integer(), "n_queries must be square number"
+         height = width = int(num_queries_vis_abstractor**0.5)
+
+     # post-processing (unpad, add newline)
+     new_image_features = []
+     for image_idx, image_feature in enumerate(image_forward_outs):
+         if image_feature.shape[0] > 1:
+             image_feature = reshape_and_unpad_image_features(
+                 image_feature=image_feature,
+                 height=height,
+                 width=width,
+                 image_size=image_sizes[image_idx],
+                 possible_resolutions=possible_resolutions,
+                 grid_size=grid_size,  # Pass grid info if needed by helper
+                 unpad=unpad,
+                 image_newline=image_newline,
+             )
+         else:
+             image_feature = image_feature[0]
+             image_feature = torch.cat((image_feature, image_newline[None].to(image_feature.device)), dim=0)
+         new_image_features.append(image_feature)
+     image_features = new_image_features
+     return image_features
+
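For a single-grid image, `anyres_postprocessing` takes the else-branch and just appends one newline embedding. A toy check with illustrative dimensions:

import torch

image_feature = torch.randn(1, 81, 4)  # one grid -> 81 projected tokens (feature dim 4 here)
image_newline = torch.randn(4)
out = torch.cat((image_feature[0], image_newline[None]), dim=0)
print(out.shape)                       # torch.Size([82, 4]): 81 visual tokens + 1 newline
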
295
+ @dataclass
296
+ class HCXVisionOutput(ModelOutput):
297
+ """Output class for vision models, containing various computation results.
298
+
299
+ Args:
300
+ loss (Optional[torch.FloatTensor], optional): Total cross-entropy loss calculated from logits and labels.
301
+ loss_per_sample (Optional[torch.FloatTensor], optional): Per-sample loss values for advanced loss processing.
302
+ logits (torch.FloatTensor): Classification scores (before SoftMax) of shape (batch_size, num_classes).
303
+ past_key_values (Optional[Tuple[Tuple[torch.FloatTensor]]], optional): Contains precomputed hidden-states
304
+ that can be used (see `past_key_values` input) to speed up sequential decoding.
305
+ hidden_states (Optional[Tuple[torch.FloatTensor]], optional):
306
+ Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of
307
+ shape (batch_size, sequence_length, hidden_size).
308
+ Hidden-states of the model at the output of each layer plus the initial embedding outputs.
309
+ attentions (Optional[Tuple[torch.FloatTensor]], optional): Tuple of torch.FloatTensor (one for each layer)
310
+ of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention
311
+ softmax, used to compute the weighted average in the self-attention heads.
312
+ """
313
+
314
+ loss: Optional[torch.FloatTensor] = None
315
+ loss_per_sample: Optional[torch.FloatTensor] = None
316
+ logits: torch.FloatTensor = None
317
+ past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
318
+ hidden_states: Optional[Tuple[torch.FloatTensor]] = None
319
+ attentions: Optional[Tuple[torch.FloatTensor]] = None
320
+
321
+
322
+ class HCXVisionForCausalLM(PreTrainedModel, GenerationMixin):
323
+ """HCX Vision model for causal language modeling with vision-language capabilities.
324
+
325
+ This class combines a vision model with a language model to create a multimodal model
326
+ capable of processing images or videos and generating text based on the visual inputs.
327
+
328
+ Attributes:
329
+ config_class: Configuration class for the model.
330
+ vision_model_name: Name of the vision model component.
331
+ _no_split_modules: List of modules that should not be split during parallel processing.
332
+ supports_gradient_checkpointing: Whether the model supports gradient checkpointing.
333
+ _skip_keys_device_placement: Keys to skip during device placement.
334
+ """
335
+
336
+ config_class = HCXVisionConfig
337
+ vision_model_name = "vision_model"
338
+ _no_split_modules = ["SiglipEncoderLayer", "LlamaDecoderLayer", "HyperCLOVAXDecoderLayer"]
339
+ supports_gradient_checkpointing = True
340
+ _skip_keys_device_placement = "past_key_values"
341
+ _supports_flash_attn_2 = True
342
+ _supports_sdpa = True
343
+
+     def __init__(
+         self,
+         config: HCXVisionConfig,
+         **kwargs: Optional[Any],
+     ) -> None:
+         """Initialize the HCXVisionForCausalLM model.
+
+         Args:
+             config: Configuration object for the model containing parameters for both
+                 vision and language components.
+             **kwargs: Additional keyword arguments:
+                 - use_liger: Whether to use the liger kernel for hyperclovax models.
+                 - use_fused_ce: Whether to use fused cross-entropy loss.
+                 - use_sum_loss: Whether to use sum reduction for the loss instead of mean.
+                 - is_safetensor_save: Whether to save the model in safetensors format.
+
+         Raises:
+             ValueError: If vision_config or text_config is not defined.
+         """
+         super().__init__(config)  # self.config = config
+
+         # consume kwargs first; self.use_liger and the loss flags are read below
+         self.use_liger = kwargs.pop("use_liger", False)
+         self.use_fused_ce = kwargs.pop("use_fused_ce", False)
+         self.use_meansum_loss = kwargs.pop("use_meansum_loss", False)
+         self.freeze_before_sampler = kwargs.pop("freeze_before_sampler", False)
+         self.use_turnmeansum_loss = kwargs.pop("use_turnmeansum_loss", False)
+         self.vision_input_chunk_size = kwargs.pop("vision_input_chunk_size", None)
+         self.is_safetensor_save = kwargs.get("is_safetensor_save", True)
+
+         # init configs
+         text_config = self._init_text_config(config)
+         vision_config = self._init_vision_config(config)
+
+         # possible_resolutions must match preprocessor_config.json
+         config.possible_resolutions = self._init_possible_resolutions(config, vision_config)
+
+         # init models & parameters
+         with no_init_weights():  # weights are loaded in from_pretrained
+             self.vision_model = AutoModel.from_config(vision_config, trust_remote_code=True)
+
+         self.mm_projector = self._init_mm_projector(config, text_config, vision_config)
+
+         self.language_model = AutoModelForCausalLM.from_config(text_config)
+         self.lm_head_vocab_size = getattr(text_config, "padded_vocab_size", text_config.vocab_size)
+         self.language_model.lm_head = nn.Linear(text_config.hidden_size, self.lm_head_vocab_size, bias=False)
+
+         if config.anyres:
+             self.image_newline = nn.Parameter(torch.empty(text_config.hidden_size, dtype=self.dtype))
+
+         # modify configs or model settings
+         if text_config.model_type in ["llama", "hyperclovax", "gpt2"]:
+             self.language_model.gradient_checkpointing_enable()
+         if text_config.model_type == "hyperclovax" and self.use_liger:
+             self.language_model._get_apply_liger_kernel_converter()(model=self.language_model)
+
+         # update configs
+         self.vision_config = vision_config = self.vision_model.config
+         self.text_config = text_config = self.language_model.config
+         config.update({"vision_config": vision_config})
+         config.update({"text_config": text_config})
+
+         use_sum_loss = bool(kwargs.pop("use_sum_loss", False))
+         self.reduction = self._init_reduction_type(use_sum_loss)
+
+         self.vision_model_use_no_grad = None  # checked and assigned on the first forward pass
+
+         # part of self.post_init(): checks whether gradient checkpointing is possible and enables it
+         self._backward_compatibility_gradient_checkpointing()
+
+     def _init_weights(self, module):
+         # copied from https://github.com/kakaobrain/honeybee/blob/main/honeybee/common_layers.py#L55
+         if (
+             isinstance(module, nn.Conv2d)  # noqa: SIM101
+             or isinstance(module, nn.Embedding)
+             or isinstance(module, nn.Linear)
+         ):
+             module.weight.data.normal_(mean=0.0, std=0.02)
+             if hasattr(module, "bias") and module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.LayerNorm):
+             module.bias.data.zero_()
+             module.weight.data.fill_(1.0)
+         elif isinstance(module, nn.Parameter):
+             embed_std = 1 / torch.sqrt(torch.tensor(module.size(0), dtype=torch.float)).to(module.dtype)
+             module.data.normal_(mean=0.0, std=embed_std)
+
+     def _init_reduction_type(self, use_sum_loss):
+         assert not (
+             self.use_meansum_loss and self.use_turnmeansum_loss
+         ), "use_meansum_loss and use_turnmeansum_loss cannot both be True; only one or neither may be True."
+         if self.use_meansum_loss or self.use_turnmeansum_loss:
+             reduction = "none"
+         elif use_sum_loss:
+             reduction = "sum"
+         else:
+             reduction = "mean"
+         return reduction
+
+     def _init_vision_config(self, config):
+         vision_model_type = config.vision_config.model_type
+         if vision_model_type in CONFIG_MAPPING:
+             vision_config = CONFIG_MAPPING[vision_model_type](**config.vision_config.to_dict())
+             vision_config.auto_map = {}
+         else:
+             if config.vision_model_name_or_path is not None:
+                 vision_config = AutoConfig.from_pretrained(config.vision_model_name_or_path, trust_remote_code=True)
+             elif config.vision_config._name_or_path is not None:
+                 vision_config = AutoConfig.from_pretrained(config.vision_config._name_or_path, trust_remote_code=True)
+             else:
+                 raise ValueError("vision_config is not defined")
+
+         vision_config.anyres = config.anyres
+         vision_config.max_num_grids = config.max_num_grids
+         return vision_config
+
+     def _init_text_config(self, config):
+         if hasattr(config, "text_config") and config.text_config is not None:
+             model_type = config.text_config.model_type
+             text_config = CONFIG_MAPPING[model_type](**config.text_config.to_dict())
+         else:
+             raise ValueError("text_config is not defined")
+         text_config._attn_implementation = config._attn_implementation
+         if text_config.model_type != "hyperclovax":
+             text_config.logits_scaling = 1.0
+         return text_config
+
+     def _init_possible_resolutions(self, config, vision_config):
+         """possible_resolutions must match preprocessor_config.json"""
+         if not getattr(config, "possible_resolutions", []):
+             possible_resolutions = []
+             if config.anyres:
+                 assert config.max_num_grids > 0
+                 for i in range(1, config.max_num_grids + 1):
+                     for j in range(1, config.max_num_grids + 1):
+                         if i == 1 and j == 1 and not config.use_1x1_grid:
+                             continue
+                         if i * j <= config.max_num_grids:
+                             possible_resolutions.append([i, j])
+
+                 possible_resolutions = [
+                     [ys * vision_config.image_size, xs * vision_config.image_size] for ys, xs in possible_resolutions
+                 ]
+             return possible_resolutions
+         else:
+             return config.possible_resolutions
+
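The enumeration above can be replayed standalone. With image_size=378 and max_num_grids=9 (and the 1x1 grid allowed), it reproduces the 23 `possible_resolutions` entries shipped in preprocessor_config.json:

image_size, max_num_grids = 378, 9
grids = [
    [i, j]
    for i in range(1, max_num_grids + 1)
    for j in range(1, max_num_grids + 1)
    if i * j <= max_num_grids
]
possible_resolutions = [[ys * image_size, xs * image_size] for ys, xs in grids]
print(len(possible_resolutions))  # 23, from [378, 378] up to [3402, 378]
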
+     def _init_mm_projector(self, config, text_config, vision_config):
+         input_hidden_size = vision_config.hidden_size
+         if config.mm_projector_type == "linear":
+             mm_projector = nn.Linear(input_hidden_size, text_config.hidden_size)
+             mm_projector.dtype = next(mm_projector.parameters()).dtype
+         elif config.mm_projector_type == "cabstractor":
+             mm_projector = HCXVisionCAbstractor(
+                 num_queries=config.num_queries_vis_abstractor_image,
+                 num_input_tokens=(vision_config.image_size // vision_config.patch_size) ** 2,
+                 encoder_hidden_size=input_hidden_size,
+                 hidden_size=input_hidden_size,
+                 output_hidden_size=text_config.hidden_size,
+                 pos_emb=config.proj_pos_emb,
+                 prenorm=config.proj_prenorm,
+             )
+         else:
+             mm_projector = HCXVisionMlp(
+                 config.mm_projector_type,
+                 input_hidden_size,
+                 hidden_features=input_hidden_size,  # TODO: as in LLaVA, use the LLM embedding size instead of input_hidden_size
+                 out_features=text_config.hidden_size,  # self.text_config is not set yet at this point
+             )
+         return mm_projector
+
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         pixel_values_images: Optional[List[List[torch.FloatTensor]]] = None,
+         image_sizes_images: Optional[List[List[Tuple[int, int]]]] = None,
+         pixel_values_videos: Optional[List[List[torch.FloatTensor]]] = None,
+         past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         **kwargs,
+     ) -> Union[Tuple, HCXVisionOutput]:
+         """Forward pass of the model.
+
+         This method processes the input tokens and images, combines them into a unified
+         representation, and generates text output based on the inputs.
+
+         Args:
+             input_ids: Input token IDs. Positions where images are inputted are filled with the "<|dummy3|>" token.
+             pixel_values_images: List of lists of 4D image tensors; each outer list entry corresponds to one batch
+                 sample and contains that sample's image tensors.
+             image_sizes_images: List of lists of original image dimensions (width, height), aligned with
+                 pixel_values_images.
+             pixel_values_videos: List of lists of video frame tensors, one inner list per batch sample.
+             past_key_values: Pre-computed key and value states of the attention layers for faster inference.
+             attention_mask: Mask to avoid performing attention on padding token indices.
+             position_ids: Indices of positions of each input sequence token.
+             inputs_embeds: Input embeddings. If provided, input_ids will not be used.
+             labels: Labels for computing the language modeling loss.
+             use_cache: Whether to use past key/values for faster inference.
+             output_attentions: Whether to return attention weights of each layer.
+             output_hidden_states: Whether to return hidden states of each layer.
+             return_dict: Whether to return a ModelOutput instead of a tuple.
+             **kwargs: Additional keyword arguments.
+
+         Returns:
+             If return_dict=True, returns an HCXVisionOutput object containing:
+                 - loss: Language modeling loss if labels are provided, otherwise None.
+                 - loss_per_sample: Per-sample loss if labels are provided, otherwise None.
+                 - logits: Prediction scores of the language modeling head.
+                 - past_key_values: Past key/values for faster inference if use_cache=True.
+                 - hidden_states: Hidden states of all layers if output_hidden_states=True.
+                 - attentions: Attention weights of all layers if output_attentions=True.
+             If return_dict=False, returns a tuple containing the above items except loss_per_sample.
+         """
+         output_attentions = (
+             output_attentions if output_attentions is not None else self.config.vision_config.output_attentions
+         )
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.vision_config.output_hidden_states
+         )
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if inputs_embeds is None and past_key_values is None:
+             if pixel_values_images is not None or pixel_values_videos is not None:
+                 inputs_embeds = self.extract_inputs_embeds(
+                     input_ids=input_ids,
+                     pixel_values_images=pixel_values_images,
+                     image_sizes_images=image_sizes_images,
+                     pixel_values_videos=pixel_values_videos,
+                 )
+             else:
+                 inputs_embeds = self.get_input_embeddings()(input_ids)
+
+         if inputs_embeds is not None:
+             input_ids = None
+
+         # decoder outputs consist of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.language_model.base_model(
+             input_ids=input_ids,
+             inputs_embeds=inputs_embeds,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         hidden_states = outputs[0]
+         hidden_states = hidden_states * self.text_config.logits_scaling
+
+         loss = None
+         loss_per_sample = None
+         logits = self.language_model.lm_head(hidden_states)
+         if labels is not None:
+             # Shift so that tokens < n predict n
+             shift_logits = logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss(reduction="none")  # ignore IGNORE_INDEX (-100)
+             shift_logits = shift_logits.view(-1, self.lm_head_vocab_size)
+             shift_labels = shift_labels.view(-1)
+
+             # Enable model/pipeline parallelism
+             shift_labels = shift_labels.to(shift_logits.device)
+             loss = loss_fct(shift_logits, shift_labels)
+             if get_rank() == 0:
+                 loss_per_sample = loss.view(logits.shape[0], -1).sum(axis=1) / (
+                     shift_labels.view(logits.shape[0], -1) != self.config.ignore_index
+                 ).sum(axis=1)
+             loss = loss[shift_labels != self.config.ignore_index].mean()
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+
+         return HCXVisionOutput(
+             loss=loss,
+             loss_per_sample=loss_per_sample,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+
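The label shift and ignore-index masking in `forward` follow the standard causal-LM recipe. A self-contained sketch (vocabulary size and label values are made up):

import torch
from torch.nn import CrossEntropyLoss

logits = torch.randn(1, 5, 10)                  # (batch, seq, vocab)
labels = torch.tensor([[-100, -100, 3, 7, 2]])  # -100 masks prompt/visual positions
shift_logits = logits[..., :-1, :].reshape(-1, 10)
shift_labels = labels[..., 1:].reshape(-1)
loss = CrossEntropyLoss(reduction="none")(shift_logits, shift_labels)
loss = loss[shift_labels != -100].mean()        # average over answer tokens only
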
+     # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.get_input_embeddings
+     def get_input_embeddings(self):
+         return self.language_model.get_input_embeddings()
+
+     # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.set_input_embeddings
+     def set_input_embeddings(self, value):
+         self.language_model.set_input_embeddings(value)
+
+     # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.get_output_embeddings
+     def get_output_embeddings(self):
+         return self.language_model.get_output_embeddings()
+
+     # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.set_output_embeddings
+     def set_output_embeddings(self, new_embeddings):
+         self.language_model.set_output_embeddings(new_embeddings)
+
+     # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.set_decoder
+     def set_decoder(self, decoder):
+         self.language_model.set_decoder(decoder)
+
+     # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.get_decoder
+     def get_decoder(self):
+         return self.language_model.get_decoder()
+
+     # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.tie_weights
+     def tie_weights(self):
+         return self.language_model.tie_weights()
+
+     # Copied from transformers.models.llava.modeling_llava.LlavaForConditionalGeneration.resize_token_embeddings
+     def resize_token_embeddings(self, new_num_tokens: Optional[int] = None, pad_to_multiple_of=None) -> nn.Embedding:
+         model_embeds = self.language_model.resize_token_embeddings(new_num_tokens, pad_to_multiple_of)
+         self.config.text_config.vocab_size = model_embeds.num_embeddings
+         self.vocab_size = model_embeds.num_embeddings
+         return model_embeds
+
+     def extract_inputs_embeds(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         pixel_values_images: Optional[List[List[torch.FloatTensor]]] = None,
+         image_sizes_images: Optional[List[List[Tuple[int, int]]]] = None,
+         pixel_values_videos: Optional[List[List[torch.FloatTensor]]] = None,
+     ):
+         """Extract input embeddings by combining text token and visual embeddings.
+
+         This method runs the vision model over the image and video inputs, projects the visual
+         features, and writes them into the embedding positions marked by the image/video
+         placeholder tokens, producing a unified input representation for the language model.
+
+         Args:
+             input_ids: Input token IDs with image/video placeholder tokens at visual positions.
+             pixel_values_images: List of lists of image tensors, one inner list per batch sample.
+             image_sizes_images: List of lists of original image dimensions (width, height).
+             pixel_values_videos: List of lists of video frame tensors, one inner list per batch sample.
+
+         Returns:
+             Combined embeddings of text tokens and visual features, or None if there is no visual input.
+         """
+         # lengths for converting back to the list-of-lists format
+         len_pixel_values_images = [len(pixel_value) for pixel_value in pixel_values_images] if pixel_values_images else []
+         len_pixel_values_videos = [len(pixel_value) for pixel_value in pixel_values_videos] if pixel_values_videos else []
+
+         if sum(len_pixel_values_images) + sum(len_pixel_values_videos) == 0:
+             return None
+
+         inputs_embeds = self.get_input_embeddings()(input_ids)
+
+         if sum(len_pixel_values_images) > 0:
+             image_features_batch = self.forward_images(
+                 pixel_values_images, image_sizes_images, len_pixel_values_images
+             )
+             for i, image_features in enumerate(image_features_batch):
+                 if len(image_features) > 0:
+                     image_token_indices = (input_ids[i] == self.config.image_token_id).nonzero().squeeze()
+                     inputs_embeds[i][image_token_indices] = torch.cat(image_features).to(inputs_embeds.dtype)
+
+         if sum(len_pixel_values_videos) > 0:
+             video_features_batch = self.forward_videos(pixel_values_videos, len_pixel_values_videos)
+             for i, video_features in enumerate(video_features_batch):
+                 if len(video_features) > 0:
+                     video_token_indices = (input_ids[i] == self.config.video_token_id).nonzero().squeeze()
+                     inputs_embeds[i][video_token_indices] = torch.cat(video_features).to(inputs_embeds.dtype)
+
+         return inputs_embeds
+
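Visual features are spliced into the sequence by plain index assignment on the embedding tensor. A toy version of the image branch, where the id 7 stands in for config.image_token_id:

import torch

inputs_embeds = torch.zeros(1, 6, 4)            # (batch, seq, hidden)
input_ids = torch.tensor([[5, 7, 7, 7, 9, 2]])  # three image placeholder tokens
image_features = torch.ones(3, 4)               # three projected visual tokens
image_token_indices = (input_ids[0] == 7).nonzero().squeeze()
inputs_embeds[0][image_token_indices] = image_features
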
+     def forward_images(
+         self,
+         pixel_values_images: List[List[torch.FloatTensor]],
+         image_sizes_images: List[List[Tuple[int, int]]],
+         len_pixel_values_images: List[int],
+     ) -> List[List[torch.Tensor]]:
+         if sum(len_pixel_values_images) == 0:
+             return None
+
+         concat_pixel_values_images = torch.cat(list(chain(*pixel_values_images)), dim=0)
+
+         visual_token_idx = 0 if "siglip" in self.vision_config.model_type else 1
+         context_vision_model = torch.no_grad() if self.vision_model_use_no_grad else contextlib.nullcontext()
+         with context_vision_model:
+             if self.config.use_nth_layer == -1:
+                 # Replace post_layernorm of the last layer with Identity
+                 self.vision_model.vision_model.post_layernorm = nn.Identity()
+                 image_forward_outs = self.vision_model(concat_pixel_values_images)
+                 image_forward_outs = image_forward_outs.last_hidden_state[:, visual_token_idx:]
+             else:
+                 image_forward_outs = self.vision_model(concat_pixel_values_images, output_hidden_states=True)
+                 image_forward_outs = image_forward_outs.hidden_states[self.config.use_nth_layer][:, visual_token_idx:]
+
+         image_forward_outs = image_forward_outs.to(dtype=self.mm_projector.dtype)
+         image_forward_outs = self.mm_projector(image_forward_outs)  # b (h w) d
+
+         # Split the features per image, e.g. torch.Size([18, 81, 3072]) -> [torch.Size([9, 81, 3072]), torch.Size([9, 81, 3072])]
+         split_sizes = [pixel_value.shape[0] for pixel_value in chain(*pixel_values_images)]
+         image_forward_outs = torch.split(image_forward_outs, split_sizes, dim=0)
+
+         # Append newline embeddings (anyres postprocessing)
+         image_features = anyres_postprocessing(
+             image_forward_outs=image_forward_outs,
+             image_sizes=[image_size for image_sizes in image_sizes_images for image_size in image_sizes],
+             num_queries_vis_abstractor=self.config.num_queries_vis_abstractor_image,
+             unpad=self.config.unpad,
+             patch_size=self.vision_config.patch_size,
+             grid_size=self.vision_config.image_size,
+             image_newline=self.image_newline,
+             possible_resolutions=self.config.possible_resolutions,
+         )
+
+         # Restore the original pixel_values_images nesting
+         image_features = [
+             image_features[sum(len_pixel_values_images[:i]) : sum(len_pixel_values_images[: i + 1])]
+             for i in range(len(len_pixel_values_images))
+         ]
+
+         return image_features
+
+     def forward_videos(
+         self,
+         pixel_values_videos: List[List[torch.FloatTensor]],
+         len_pixel_values_videos: List[int],
+     ) -> List[torch.Tensor]:
+         len_video_grids = sum(len_pixel_values_videos)
+         if len_video_grids == 0:
+             return None
+
+         # Run Vision Model
+         concat_pixel_values_videos = torch.cat(list(chain(*pixel_values_videos)), dim=0)
+
+         visual_token_idx = 0 if "siglip" in self.vision_config.model_type else 1
+         context_vision_model = torch.no_grad() if self.vision_model_use_no_grad else contextlib.nullcontext()
+         with context_vision_model:
+             if self.config.use_nth_layer == -1:
+                 # Replace post_layernorm of the last layer with Identity
+                 self.vision_model.vision_model.post_layernorm = nn.Identity()
+                 video_forward_outs = self.vision_model(concat_pixel_values_videos)
+                 video_forward_outs = video_forward_outs.last_hidden_state[:, visual_token_idx:]
+             else:
+                 video_forward_outs = self.vision_model(concat_pixel_values_videos, output_hidden_states=True)
+                 video_forward_outs = video_forward_outs.hidden_states[self.config.use_nth_layer][:, visual_token_idx:]
+
+         video_forward_outs = video_forward_outs.to(dtype=self.mm_projector.dtype)
+
+         # Run MM-Projector
+         # len(num_grids) == len(num_queries_vis_abstractors) + 1
+         grid_idx = 0
+         num_grids = [grid_idx]  # e.g. [0, 9, 18, 19, 27, 28, 36, 37, 45, 46, 54, 55, 56]
+         num_queries_vis_abstractors = []  # e.g. [81, 81, 81, 9, 81, 9, 81, 9, 81, 9, 81, 9]
+         len_total_frames = video_forward_outs.shape[0]
+
+         if self.config.first_last_frames_slow:
+             # TODO: behavior not verified yet; needs testing.
+             # slowfast (first_last_frames_slow)
+             assert len_total_frames != 0
+             if len_total_frames <= 2:
+                 num_queries_vis_abstractors.append(self.config.num_queries_vis_abstractor_video_slow)
+                 grid_idx += len_total_frames
+                 num_grids.append(grid_idx)
+             else:
+                 num_queries_vis_abstractors.append(self.config.num_queries_vis_abstractor_video_slow)
+                 grid_idx += 1
+                 num_grids.append(grid_idx)
+
+                 num_queries_vis_abstractors.append(self.config.num_queries_vis_abstractor_video_fast)
+                 grid_idx += len_total_frames - 2
+                 num_grids.append(grid_idx)
+
+                 num_queries_vis_abstractors.append(self.config.num_queries_vis_abstractor_video_slow)
+                 grid_idx += 1
+                 num_grids.append(grid_idx)
+         else:
+             # slowfast
+             for pixel_values_frames in pixel_values_videos:
+                 for pixel_values_frame in pixel_values_frames:
+                     if len(pixel_values_frame) > 0:
+                         num_queries_vis_abstractors.append(self.config.num_queries_vis_abstractor_video_slow)
+                         grid_idx += 1
+                         num_grids.append(grid_idx)
+                         num_queries_vis_abstractors.append(self.config.num_queries_vis_abstractor_video_fast)
+                         grid_idx = grid_idx + len(pixel_values_frame) - 1
+                         num_grids.append(grid_idx)
+
+         video_forward_outs = self.mm_projector(video_forward_outs, num_queries_vis_abstractors, num_grids)
+
+         # Concatenate per video group: e.g. with a 3x3 grid, collect features in
+         # target_features until all 9 for the group have been gathered, then concatenate.
+         video_features = []  # what we want to return
+         target_features = []
+         target_group_size = 0
+         group_counter = 0
+         video_groups = [
+             len(frame) for frames in pixel_values_videos for frame in frames
+         ]  # for concatenating video features after the projector
+
+         for forward_out in video_forward_outs:
+             target_group_size += len(forward_out)
+             target_features.append(forward_out.flatten(0, 1))
+
+             video_group_size = video_groups[group_counter]
+             if video_group_size == target_group_size:
+                 video_features.append(torch.cat(target_features, dim=0))
+                 target_features = []
+                 group_counter += 1
+                 target_group_size = 0
+             elif video_group_size < target_group_size:
+                 raise RuntimeError(f"video_group_size < target_group_size!! [{video_group_size} < {target_group_size}]")
+
+         assert len(target_features) == 0, f"target_features is not empty!! {target_features}"
+         assert len(video_groups) == len(video_features)
+
+         # Restore the original pixel_values_videos nesting
+         video_features = [
+             video_features[sum(len_pixel_values_videos[:i]) : sum(len_pixel_values_videos[: i + 1])]
+             for i in range(len(len_pixel_values_videos))
+         ]
+
+         return video_features
+
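The slow/fast bookkeeping in `forward_videos` builds cumulative grid offsets. Replaying the default branch for one video with two 9-grid frame groups (stand-in values), using the 81-slow / 9-fast query counts from preprocessor_config.json:

slow, fast = 81, 9
frame_group_lens = [9, 9]  # grids per frame group (illustrative)
grid_idx, num_grids, num_queries_vis_abstractors = 0, [0], []
for group_len in frame_group_lens:
    num_queries_vis_abstractors.append(slow)  # first grid of the group at slow resolution
    grid_idx += 1
    num_grids.append(grid_idx)
    num_queries_vis_abstractors.append(fast)  # remaining grids at fast resolution
    grid_idx += group_len - 1
    num_grids.append(grid_idx)
print(num_grids)                    # [0, 1, 9, 10, 18]
print(num_queries_vis_abstractors)  # [81, 9, 81, 9]
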
+     @torch.no_grad()
+     def generate(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         pixel_values_images: Optional[List[List[torch.FloatTensor]]] = None,
+         image_sizes_images: Optional[List[List[Tuple[int, int]]]] = None,
+         pixel_values_videos: Optional[List[List[torch.FloatTensor]]] = None,
+         pad_token_id: Optional[int] = None,
+         eos_token_id: Optional[int] = None,
+         bad_words_ids: Optional[List[List[int]]] = None,
+         max_length: int = 196,
+         min_length: int = 2,
+         do_sample: bool = True,
+         num_beams: int = 1,
+         top_p: float = 0.6,
+         top_k: int = 0,
+         temperature: float = 0.5,
+         repetition_penalty: float = 1.0,
+         length_penalty: int = 1,
+         use_cache: bool = True,
+         verbose: bool = False,
+         **kwargs,
+     ) -> torch.LongTensor:
+         """Generate text based on input tokens and images.
+
+         This method generates text based on the provided input tokens and images using
+         beam search and/or sampling strategies.
+
+         Args:
+             input_ids: Input token IDs with image/video placeholder tokens at visual positions.
+             pixel_values_images: List of lists of image tensors, one inner list per batch sample.
+             image_sizes_images: List of lists of original image dimensions (width, height).
+             pixel_values_videos: List of lists of video frame tensors, one inner list per batch sample.
+             pad_token_id: Token ID used for padding.
+             eos_token_id: Token ID used to signal the end of a sequence.
+             bad_words_ids: List of token ID sequences that should not be generated.
+             max_length: Maximum number of new tokens to generate (forwarded as max_new_tokens).
+             min_length: Minimum length of the sequence to be generated (input length + min_new_tokens).
+             do_sample: Whether to use sampling for generation (otherwise uses greedy decoding).
+             num_beams: Number of beams for beam search. 1 means no beam search.
+             top_p: Nucleus sampling parameter. Tokens with cumulative probability > top_p are kept.
+             top_k: Number of highest probability tokens to keep for top-k filtering.
+             temperature: Value used to modulate the next token probabilities.
+             repetition_penalty: Penalty applied to tokens that have already appeared in the sequence.
+             length_penalty: Exponential penalty applied to sequence length.
+             use_cache: Whether to use past key/values for faster inference.
+             verbose: Whether to print the decoded query and prediction.
+             **kwargs: Additional keyword arguments.
+
+         Returns:
+             Generated token IDs.
+         """
+         # inputs_embeds: torch.bfloat16 : [batch_size, variable length (visual tokens, text tokens, and system prompt)]
+         if pad_token_id is None:
+             pad_token_id = self.tokenizer.pad_token_id
+         if eos_token_id is None:
+             eos_token_id = self.tokenizer.encode("<|endofturn|>")[0]
+         if bad_words_ids is None:
+             bad_words_ids = [
+                 [self.config.text_config.bos_token_id],
+                 [self.config.text_config.eos_token_id],
+             ]
+
+         if (pixel_values_images is None or all(len(pixel_values) == 0 for pixel_values in pixel_values_images)) and (
+             pixel_values_videos is None or all(len(pixel_values) == 0 for pixel_values in pixel_values_videos)
+         ):
+             return self.language_model.generate(
+                 input_ids, pad_token_id=pad_token_id, eos_token_id=eos_token_id, bad_words_ids=bad_words_ids, **kwargs
+             )
+
+         inputs_embeds = self.extract_inputs_embeds(
+             input_ids=input_ids,
+             pixel_values_images=pixel_values_images,
+             image_sizes_images=image_sizes_images,
+             pixel_values_videos=pixel_values_videos,
+         )
+         inputs_embeds = inputs_embeds.to(device=self.language_model.device, dtype=self.language_model.dtype)
+
+         # pred : torch.int64 : [batch_size, generated token length]
+         pred = self.language_model.generate(
+             inputs_embeds=inputs_embeds,
+             pad_token_id=pad_token_id,
+             eos_token_id=eos_token_id,
+             bad_words_ids=bad_words_ids,
+             max_new_tokens=max_length,
+             min_length=min_length,
+             num_beams=num_beams,
+             do_sample=(False if temperature == 0.0 else do_sample),  # set do_sample=False for zero temperature
+             top_k=top_k,
+             top_p=top_p,
+             temperature=temperature,
+             repetition_penalty=repetition_penalty,
+             length_penalty=length_penalty,
+             early_stopping=(num_beams > 1),  # early_stopping only applies to beam search
+             use_cache=use_cache,
+         )
+
+         if verbose:
+             llm_query = self.tokenizer.batch_decode(
+                 [
+                     [token_id for token_id in input_ids_row if token_id != self.tokenizer.pad_token_id]
+                     for input_ids_row in input_ids.detach().cpu().tolist()
+                 ],
+                 skip_special_tokens=False,
+             )[0]
+             llm_pred = self.tokenizer.batch_decode(
+                 [
+                     [token_id for token_id in pred_row if token_id != self.tokenizer.pad_token_id]
+                     for pred_row in pred.detach().cpu().tolist()
+                 ],
+                 skip_special_tokens=False,
+             )[0]
+             print(f"# [info] llm_query: {llm_query}")
+             print(f"# [info] llm_pred: {llm_pred}")
+
+         return pred
+
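A minimal text-only usage sketch, treating the call as illustrative: when no pixel values are passed, `generate` falls through to the wrapped language model, and image inputs would additionally require the HCXProcessor registered via auto_map. The repo id matches this model card; max_new_tokens is forwarded through **kwargs:

from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B"
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(repo)
input_ids = tokenizer("Hello, HyperCLOVA X!", return_tensors="pt").input_ids
pred = model.generate(input_ids=input_ids, max_new_tokens=32)
print(tokenizer.batch_decode(pred, skip_special_tokens=True)[0])
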
+     def to_vision_model_device(self, input_tensor: Union[torch.Tensor, List]) -> Union[torch.Tensor, List]:
+         """Move input tensors to the vision model's device.
+
+         This method recursively moves input tensors or lists of tensors to the vision model's device.
+
+         Args:
+             input_tensor: Input tensor or list of tensors to be moved to the vision model's device.
+
+         Returns:
+             The input tensor or list of tensors moved to the vision model's device.
+
+         Raises:
+             TypeError: If the input is neither a tensor nor a list.
+         """
+         if isinstance(input_tensor, list):
+             return [self.to_vision_model_device(item) for item in input_tensor]
+         elif isinstance(input_tensor, torch.Tensor):
+             return input_tensor.to(self.vision_model.device)
+         else:
+             raise TypeError("Unsupported data type. Only tensors and lists are allowed.")
+
+     def prepare_inputs_for_generation(
+         self,
+         input_ids: torch.LongTensor,
+         past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         **kwargs,
+     ) -> Dict[str, Any]:
+         """Prepare inputs for the generation algorithm.
+
+         This method prepares the input for each generation step based on the model's needs.
+
+         Args:
+             input_ids: Input token IDs.
+             past_key_values: Pre-computed key and value states for faster inference.
+             attention_mask: Mask to avoid performing attention on padding token indices.
+             inputs_embeds: Input embeddings. If provided, input_ids will not be used.
+             **kwargs: Additional keyword arguments.
+
+         Returns:
+             Dictionary containing the prepared inputs for the model.
+         """
+         input_ids = kwargs.get("decoder_input_ids", input_ids)
+
+         if past_key_values:
+             input_ids = input_ids[:, -1:]
+
+         # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_key_values is None:
+             model_inputs = {"inputs_embeds": inputs_embeds}
+         else:
+             model_inputs = {"input_ids": input_ids}
+
+         model_inputs.update(
+             {
+                 "past_key_values": past_key_values,
+                 "use_cache": kwargs.get("use_cache"),
+                 "attention_mask": attention_mask,
+                 "pixel_values": kwargs.get("pixel_values", None),
+             }
+         )
+         return model_inputs
+
+     @classmethod
+     def from_config(cls, config, vision_model_name_or_path):
+         # pass vision_model_name_or_path as a keyword so it is absorbed by __init__'s **kwargs
+         return cls(config, vision_model_name_or_path=vision_model_name_or_path)
+
+     @classmethod
+     def from_pretrained(
+         cls,
+         pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] = None,
+         *model_args,
+         **kwargs,
+     ) -> "HCXVisionForCausalLM":
+         assert pretrained_model_name_or_path is not None
+
+         save_only_vision = kwargs.pop("save_only_vision", False)
+         save_only_qformer = kwargs.pop("save_only_qformer", False)
+         save_shard_size = kwargs.pop("save_shard_size", "5GB")
+
+         # when evaluating or loading an instruction-tuned model
+         model: HCXVisionForCausalLM = super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
+         model.tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)
+
+         image_token_id = model.tokenizer.encode(IMAGE_LOC, add_special_tokens=False)
+         assert (
+             len(image_token_id) == 1
+         ), f'"{IMAGE_LOC}" was not encoded into a single special token. Encoding result: {image_token_id}'
+         model.config.image_token_id = image_token_id[0]
+
+         video_token_id = model.tokenizer.encode(VIDEO_LOC, add_special_tokens=False)
+         assert (
+             len(video_token_id) == 1
+         ), f'"{VIDEO_LOC}" was not encoded into a single special token. Encoding result: {video_token_id}'
+         model.config.video_token_id = video_token_id[0]
+
+         model.save_only_vision = save_only_vision
+         model.save_only_qformer = save_only_qformer
+         model.save_shard_size = save_shard_size
+
+         return model
+
+     def get_language_model(self):
+         return self.language_model.base_model
+
+     def get_vision_model(self):
+         return self.vision_model
+
+     def save_pretrained(
+         self,
+         save_directory: Union[str, os.PathLike],
+         *args,
+         **kwargs,
+     ):
+         state_dict = kwargs["state_dict"] if "state_dict" in kwargs else self.state_dict()
+         partial_state_dict = self.get_pretrained_state_dict(
+             state_dict,
+             save_directory,
+         )
+         kwargs["state_dict"] = partial_state_dict
+         kwargs["safe_serialization"] = self.is_safetensor_save
+         kwargs.setdefault("max_shard_size", self.save_shard_size)
+         super().save_pretrained(save_directory, *args, **kwargs)
+
+     def get_pretrained_state_dict(self, state_dict, save_dir):
+         vision_key = "vision_model."
+         llm_keys = ["language_model."]
+         head_key = "lm_head."
+
+         for key in list(state_dict.keys()):
+             if self.save_only_vision:
+                 for llm_key in llm_keys:
+                     if llm_key in key:
+                         state_dict.pop(key)
+                 if key.startswith(head_key):
+                     state_dict.pop(key)
+             elif self.save_only_qformer:
+                 if f"{vision_key}" in key:
+                     state_dict.pop(key)
+
+         return state_dict
+
+
+ class HCXVisionMlp(nn.Module):
+     def __init__(
+         self,
+         mm_projector_type,
+         in_features,
+         hidden_features=None,
+         out_features=None,
+         act_layer=nn.GELU,
+     ):
+         super().__init__()
+         out_features = out_features or in_features
+         hidden_features = hidden_features or in_features
+         self.mm_projector_type = mm_projector_type
+         if self.mm_projector_type == "mlp":
+             self.fc1 = nn.Linear(in_features, hidden_features)
+             self.act = act_layer()
+             self.fc2 = nn.Linear(hidden_features, out_features)
+         elif self.mm_projector_type == "inverted_mlp":
+             self.fc1 = nn.Linear(in_features, 2 * hidden_features)
+             self.act = act_layer()
+             self.fc2 = nn.Linear(2 * hidden_features, out_features)
+         else:
+             raise NotImplementedError("{} is not implemented".format(self.mm_projector_type))
+
+     def forward(self, x):
+         x = self.fc1(x)
+         x = self.act(x)
+         x = self.fc2(x)
+         return x
+
1192
+
1193
+ class HCXVisionCAbstractor(nn.Module):
1194
+ """
1195
+ This module is based on C-Abstractor, whose license is under apache-2.0.
1196
+ You can check the original code at https://github.com/khanrc/honeybee/blob/main/honeybee/projectors/projectors.py
1197
+ and we made necessary modifications.
1198
+ """
1199
+
1200
+ def __init__(
1201
+ self,
1202
+ num_queries: int,
1203
+ num_input_tokens: int,
1204
+ encoder_hidden_size: int,
1205
+ hidden_size: int,
1206
+ output_hidden_size: int,
1207
+ pos_emb: bool = True,
1208
+ prenorm: bool = False,
1209
+ ):
1210
+ super().__init__()
1211
+ self.num_input_tokens = num_input_tokens
1212
+ self.output_hidden_size = output_hidden_size
1213
+
1214
+ # Positional embedding
1215
+ if pos_emb:
1216
+ self.pos_emb = torch.nn.Parameter(torch.zeros(1, num_input_tokens, encoder_hidden_size))
1217
+ self.pos_emb.data.normal_(mean=0.0, std=0.02)
1218
+ else:
1219
+ self.pos_emb = None
1220
+
1221
+ # (Optional) Pre-normalization layer
1222
+ if prenorm:
1223
+ self.prenorm = LayerNorm(encoder_hidden_size)
1224
+ else:
1225
+ self.prenorm = None
1226
+
1227
+ self.build_net(num_queries, encoder_hidden_size, hidden_size, output_hidden_size)
1228
+ self.dtype = next(self.parameters()).dtype
1229
+
1230
+ def forward(
1231
+ self,
1232
+ x: torch.Tensor,
1233
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
1234
+ num_grids: Optional[List[int]] = None,
1235
+ ) -> torch.Tensor:
1236
+ """
1237
+ Args:
1238
+ x: (B, L, encoder_hidden_size) tensor from the visual backbone (e.g. CLIP visual encoder), including cls token.
1239
+ """
1240
+ if self.prenorm is not None:
1241
+ x = self.prenorm(x)
1242
+
1243
+ if self.pos_emb is not None:
1244
+ x = x + self.pos_emb
1245
+
1246
+ x = self._forward(
1247
+ x,
1248
+ num_queries_vis_abstractors=num_queries_vis_abstractors,
1249
+ num_grids=num_grids,
1250
+ ) # (B, L, output_hidden_size)
1251
+
1252
+ return x
1253
+
1254
+ def _forward(
1255
+ self,
1256
+ x: torch.Tensor,
1257
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
1258
+ num_grids: Optional[List[int]] = None,
1259
+ ) -> torch.Tensor:
1260
+ # x: [B, L, dim]
1261
+ B, L, dim = x.shape
1262
+ hw = int(L**0.5)
1263
+ x = rearrange(x, "b (h w) d -> b d h w", h=hw, w=hw)
1264
+
1265
+ if num_queries_vis_abstractors is not None:
1266
+ assert num_grids is not None
1267
+ return self._forward_adaptive_num_query(x, num_queries_vis_abstractors, num_grids)
1268
+
1269
+ x = self.net(x)
1270
+ x = rearrange(x, "b d h w -> b (h w) d")
1271
+ x = self.readout(x)
1272
+ return x
1273
+
1274
+ def _forward_adaptive_num_query(
1275
+ self,
1276
+ x: torch.Tensor,
1277
+ num_queries_vis_abstractors: Optional[List[List[int]]] = None,
1278
+ num_grids: Optional[List[int]] = None,
1279
+ ) -> List[torch.Tensor]:
1280
+ # self.net is consisted by 3 layers (s1, sampler, s2)
1281
+ assert len(self.net) == 3
1282
+
1283
+ x = self.net[0](x) # s1
1284
+ new_x = []
1285
+ for i, num_queries in enumerate(num_queries_vis_abstractors):
1286
+ hw = int(num_queries**0.5)
1287
+ sampler = nn.AdaptiveAvgPool2d((hw, hw))
1288
+ out = sampler(x[num_grids[i] : num_grids[i + 1], :])
1289
+ out = self.net[2](out) # s2
1290
+
1291
+ out = rearrange(out, "b d h w -> b (h w) d")
1292
+ out = self.readout(out)
1293
+
1294
+ new_x.append(out)
1295
+ return new_x
1296
+
1297
+ def build_net(
1298
+ self,
1299
+ n_queries: int,
1300
+ encoder_hidden_size: int,
1301
+ hidden_size: int,
1302
+ output_hidden_size: int,
1303
+ depth: int = 3,
1304
+ mlp_depth: int = 2,
1305
+ ):
1306
+ assert (n_queries**0.5).is_integer(), f"n_queries must be square number. n_queries: {n_queries}"
1307
+ hw = int(n_queries**0.5)
1308
+
1309
+ # RegBlock = ResBlock + SE
1310
+ RegBlock = partial(
1311
+ RegStage,
1312
+ stride=1,
1313
+ dilation=1,
1314
+ act_layer=nn.SiLU,
1315
+ norm_layer=LayerNorm2d,
1316
+ )
1317
+
1318
+ s1 = RegBlock(
1319
+ depth,
1320
+ encoder_hidden_size,
1321
+ hidden_size,
1322
+ )
1323
+ sampler = nn.AdaptiveAvgPool2d((hw, hw))
1324
+ s2 = RegBlock(
1325
+ depth,
1326
+ hidden_size,
1327
+ hidden_size,
1328
+ )
1329
+
1330
+ self.net = nn.Sequential(s1, sampler, s2)
1331
+ self.readout = self.build_mlp(mlp_depth, hidden_size, output_hidden_size)
1332
+
1333
+ def build_mlp(
1334
+ self,
1335
+ depth: int,
1336
+ hidden_size: int,
1337
+ output_hidden_size: int,
1338
+ ):
1339
+ layers = [nn.Linear(hidden_size, output_hidden_size)]
1340
+ for _ in range(1, depth):
1341
+ layers.append(nn.SiLU())
1342
+ layers.append(nn.Linear(output_hidden_size, output_hidden_size))
1343
+ return nn.Sequential(*layers)
1344
+
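The adaptive pooling above is easiest to follow as a shape walk-through. A minimal sketch (illustrative sizes; assumes torch and einops, which the module code uses): 729 encoder tokens (27x27) are reshaped to a 2D map, average-pooled to 9x9, and flattened back to 81 query tokens.

import torch
from einops import rearrange

x = torch.randn(2, 729, 1024)                   # (B, L, encoder_hidden_size)
hw = int(x.shape[1] ** 0.5)                     # 27
x = rearrange(x, "b (h w) d -> b d h w", h=hw, w=hw)
pool = torch.nn.AdaptiveAvgPool2d((9, 9))       # num_queries = 81 -> 9x9 grid
x = rearrange(pool(x), "b d h w -> b (h w) d")
print(x.shape)                                  # torch.Size([2, 81, 1024])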
preprocessor_config.json ADDED
@@ -0,0 +1,135 @@
1
+ {
2
+ "anyres": true,
3
+ "auto_map": {
4
+ "AutoImageProcessor": "image_processing_hyperclovax.HCXImageProcessor",
5
+ "AutoProcessor": "processing_hyperclovax.HCXProcessor"
6
+ },
7
+ "crop_size": {
8
+ "height": 378,
9
+ "width": 378
10
+ },
11
+ "do_center_crop": true,
12
+ "do_convert_rgb": true,
13
+ "do_normalize": true,
14
+ "do_rescale": true,
15
+ "do_resize": true,
16
+ "image_mean": [
17
+ 0.5,
18
+ 0.5,
19
+ 0.5
20
+ ],
21
+ "image_processor_class": "AutoImageProcessor",
22
+ "image_processor_type": "HCXImageProcessor",
23
+ "image_std": [
24
+ 0.5,
25
+ 0.5,
26
+ 0.5
27
+ ],
28
+ "num_queries_vis_abstractor_image": 81,
29
+ "num_queries_vis_abstractor_video_slow": 81,
30
+ "num_queries_vis_abstractor_video_fast": 9,
31
+ "first_last_frames_slow_video": false,
32
+ "pad_to_square": true,
33
+ "patch_size": 14,
34
+ "possible_resolutions": [
35
+ [
36
+ 378,
37
+ 378
38
+ ],
39
+ [
40
+ 378,
41
+ 756
42
+ ],
43
+ [
44
+ 378,
45
+ 1134
46
+ ],
47
+ [
48
+ 378,
49
+ 1512
50
+ ],
51
+ [
52
+ 378,
53
+ 1890
54
+ ],
55
+ [
56
+ 378,
57
+ 2268
58
+ ],
59
+ [
60
+ 378,
61
+ 2646
62
+ ],
63
+ [
64
+ 378,
65
+ 3024
66
+ ],
67
+ [
68
+ 378,
69
+ 3402
70
+ ],
71
+ [
72
+ 756,
73
+ 378
74
+ ],
75
+ [
76
+ 756,
77
+ 756
78
+ ],
79
+ [
80
+ 756,
81
+ 1134
82
+ ],
83
+ [
84
+ 756,
85
+ 1512
86
+ ],
87
+ [
88
+ 1134,
89
+ 378
90
+ ],
91
+ [
92
+ 1134,
93
+ 756
94
+ ],
95
+ [
96
+ 1134,
97
+ 1134
98
+ ],
99
+ [
100
+ 1512,
101
+ 378
102
+ ],
103
+ [
104
+ 1512,
105
+ 756
106
+ ],
107
+ [
108
+ 1890,
109
+ 378
110
+ ],
111
+ [
112
+ 2268,
113
+ 378
114
+ ],
115
+ [
116
+ 2646,
117
+ 378
118
+ ],
119
+ [
120
+ 3024,
121
+ 378
122
+ ],
123
+ [
124
+ 3402,
125
+ 378
126
+ ]
127
+ ],
128
+ "processor_class": "HCXProcessor",
129
+ "resample": 2,
130
+ "rescale_factor": 0.00392156862745098,
131
+ "size": {
132
+ "shortest_edge": 378
133
+ },
134
+ "unpad": true
135
+ }
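The `possible_resolutions` list above appears to enumerate every grid of 378x378 tiles with at most 9 tiles per image. A hedged sketch (tile size and grid cap taken from this config) that reproduces all 23 entries:

tile, max_grids = 378, 9
resolutions = sorted(
    [tile * r, tile * c]
    for r in range(1, max_grids + 1)
    for c in range(1, max_grids + 1)
    if r * c <= max_grids
)
print(len(resolutions))  # 23, matching the list above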
processing_hyperclovax.py ADDED
@@ -0,0 +1,912 @@
1
+ import copy
2
+ import os
3
+ import re
4
+ import uuid
5
+ from typing import Dict, List, Optional, Tuple, Union
6
+
7
+ import numpy as np
8
+ import PIL
9
+ from PIL import Image
10
+ import torch
11
+ from transformers.feature_extraction_utils import BatchFeature
12
+ from transformers.image_utils import ImageInput, load_image
13
+ from transformers.processing_utils import (
14
+ AllKwargsForChatTemplate,
15
+ ChatTemplateLoadKwargs,
16
+ ProcessingKwargs,
17
+ ProcessorMixin,
18
+ Unpack,
19
+ )
20
+ from transformers.tokenization_utils_base import AudioInput, TextInput
+ from transformers.audio_utils import load_audio  # fix: used in apply_chat_template below but previously unimported (available in recent transformers)
21
+ from transformers.utils import (
22
+ is_torch_device,
23
+ is_torch_dtype,
24
+ logging,
25
+ requires_backends,
26
+ )
27
+ from transformers.utils.chat_template_utils import render_jinja_template
28
+ from transformers.video_utils import VideoInput, VideoMetadata, load_video
29
+
30
+ logger = logging.get_logger(__name__)
31
+
32
+
33
+ class HCXBatchFeature(BatchFeature):
34
+ def to(self, *args, **kwargs) -> "BatchFeature":
35
+ """
36
+ Send all values to device by calling `v.to(*args, **kwargs)` (PyTorch only). This should support casting in
37
+ different `dtypes` and sending the `BatchFeature` to a different `device`.
38
+
39
+ Args:
40
+ args (`Tuple`):
41
+ Will be passed to the `to(...)` function of the tensors.
42
+ kwargs (`Dict`, *optional*):
43
+ Will be passed to the `to(...)` function of the tensors.
44
+ To enable asynchronous data transfer, set the `non_blocking` flag in `kwargs` (defaults to `False`).
45
+
46
+ Returns:
47
+ [`BatchFeature`]: The same instance after modification.
48
+ """
49
+ requires_backends(self, ["torch"])
50
+ import torch # noqa
51
+
52
+ new_data = {}
53
+ device = kwargs.get("device")
54
+ non_blocking = kwargs.get("non_blocking", False)
55
+ # Check if the args are a device or a dtype
56
+ if device is None and len(args) > 0:
57
+ # device should be always the first argument
58
+ arg = args[0]
59
+ if is_torch_dtype(arg):
60
+ # The first argument is a dtype
61
+ pass
62
+ elif isinstance(arg, str) or is_torch_device(arg) or isinstance(arg, int):
63
+ device = arg
64
+ else:
65
+ # it's something else
66
+ raise ValueError(f"Attempting to cast a BatchFeature to type {str(arg)}. This is not supported.")
67
+ # We cast only floating point tensors to avoid issues with tokenizers casting `LongTensor` to `FloatTensor`
68
+ for k, v in self.items():
69
+ # check if v is a floating point
70
+ if isinstance(v, torch.Tensor) and torch.is_floating_point(v):
71
+ # cast and send to device
72
+ new_data[k] = v.to(*args, **kwargs)
73
+ elif isinstance(v, torch.Tensor) and device is not None:
74
+ new_data[k] = v.to(device=device, non_blocking=non_blocking)
75
+ elif "pixel_values" in k:
76
+ new_pixel_values_batch = []
77
+ for _v in v:
78
+ pixel_values = [pixel_value.to(device=device, non_blocking=non_blocking) for pixel_value in _v]
79
+ new_pixel_values_batch.append(pixel_values)
80
+ new_data[k] = new_pixel_values_batch
81
+ else:
82
+ new_data[k] = v
83
+ self.data = new_data
84
+ return self
85
+
86
+
87
+ class HCXProcessorKwargs(ProcessingKwargs, total=False):
88
+ _defaults = {
89
+ "text_kwargs": {
90
+ "return_tensors": "pt",
91
+ "calc_non_vision_query_lengths": False,
92
+ },
93
+ "images_kwargs": {},
94
+ "audio_kwargs": {},
95
+ "videos_kwargs": {
96
+ "max_image_cnt": 12,
97
+ "max_num_grids": 9,
98
+ },
99
+ }
100
+
101
+
102
+ class HCXProcessor(ProcessorMixin):
103
+ attributes = ["image_processor", "tokenizer"]
104
+ valid_kwargs = ["chat_template"]
105
+
106
+ image_processor_class = "AutoImageProcessor"
107
+ tokenizer_class = ("GPT2Tokenizer", "GPT2TokenizerFast")
108
+
109
+ def __init__(self, image_processor=None, tokenizer=None, chat_template=None, **kwargs):
110
+ self.image_token = "<|dummy3|>"
111
+ self.video_token = "<|_unuse_missing_100270|>"
112
+ self.image_token_pattern = re.compile(r"<\|dummy3\|>")
113
+ self.video_token_pattern = re.compile(r"<\|_unuse_missing_100270\|>")
114
+ self.image_video_token_pattern = re.compile(r"<\|dummy3\|>|<\|_unuse_missing_100270\|>")
115
+ self.image_token_id = (
116
+ tokenizer.image_token_id
117
+ if getattr(tokenizer, "image_token_id", None)
118
+ else tokenizer.convert_tokens_to_ids(self.image_token)
119
+ )
120
+ self.video_token_id = (
121
+ tokenizer.video_token_id
122
+ if getattr(tokenizer, "video_token_id", None)
123
+ else tokenizer.convert_tokens_to_ids(self.video_token)
124
+ )
125
+ super().__init__(image_processor, tokenizer, chat_template=chat_template)
126
+
127
+ def apply_chat_template(
128
+ self,
129
+ conversation: Union[list[dict[str, str]], list[list[dict[str, str]]]],
130
+ chat_template: Optional[str] = None,
131
+ **kwargs: Unpack[AllKwargsForChatTemplate],
132
+ ) -> str:
133
+ """
134
+ Similar to the `apply_chat_template` method on tokenizers, this method applies a Jinja template to input
135
+ conversations to turn them into a single tokenizable string.
136
+
137
+ The input is expected to be in the following format, where each message content is a list consisting of text and
138
+ optionally image or video inputs. One can also provide an image, video, URL or local path which will be used to form
139
+ `pixel_values` when `return_dict=True`. If not provided, one gets only the formatted text, optionally tokenized.
140
+
141
+ conversation = [
142
+ {
143
+ "role": "user",
144
+ "content": [
145
+ {"type": "image", "image": "https://www.ilankelman.org/stopsigns/australia.jpg"},
146
+ {"type": "text", "text": "Please describe this image in detail."},
147
+ ],
148
+ },
149
+ ]
150
+
151
+ Args:
152
+ conversation (`Union[List[Dict[str, str]], List[List[Dict[str, str]]]]`):
153
+ The conversation to format.
154
+ chat_template (`Optional[str]`, *optional*):
155
+ The Jinja template to use for formatting the conversation. If not provided, the tokenizer's
156
+ chat template is used.
157
+ """
158
+
159
+ if chat_template is None:
160
+ if isinstance(self.chat_template, dict) and "default" in self.chat_template:
161
+ chat_template = self.chat_template["default"]
162
+ elif isinstance(self.chat_template, dict):
163
+ raise ValueError(
164
+ 'The processor has multiple chat templates but none of them are named "default". You need to specify'
165
+ " which one to use by passing the `chat_template` argument. Available templates are: "
166
+ f"{', '.join(self.chat_template.keys())}"
167
+ )
168
+ elif self.chat_template is not None:
169
+ chat_template = self.chat_template
170
+ else:
171
+ raise ValueError(
172
+ "Cannot use apply_chat_template because this processor does not have a chat template."
173
+ )
174
+ else:
175
+ if isinstance(self.chat_template, dict) and chat_template in self.chat_template:
176
+ # It's the name of a template, not a full template string
177
+ chat_template = self.chat_template[chat_template]
178
+ else:
179
+ # It's a template string, render it directly
180
+ chat_template = chat_template
181
+
182
+ if kwargs.get("continue_final_message", False):
183
+ if kwargs.get("add_generation_prompt", False):
184
+ raise ValueError(
185
+ "continue_final_message and add_generation_prompt are not compatible. Use continue_final_message when you want the model to continue the final message, and add_generation_prompt when you want to add a header that will prompt it to start a new assistant message instead."
186
+ )
187
+ if kwargs.get("return_assistant_tokens_mask", False):
188
+ raise ValueError("continue_final_message is not compatible with return_assistant_tokens_mask.")
189
+
190
+ # Fill sets of kwargs that should be used by different parts of template
191
+ processed_kwargs = {
192
+ "mm_load_kwargs": {},
193
+ "template_kwargs": {},
194
+ }
195
+
196
+ for kwarg_type in processed_kwargs:
197
+ for key in AllKwargsForChatTemplate.__annotations__[kwarg_type].__annotations__.keys():
198
+ kwarg_type_defaults = AllKwargsForChatTemplate.__annotations__[kwarg_type]
199
+ default_value = getattr(kwarg_type_defaults, key, None)
200
+ value = kwargs.pop(key, default_value)
201
+ if value is not None and not isinstance(value, dict):
202
+ processed_kwargs[kwarg_type][key] = value
203
+
204
+ # Pass unprocessed custom kwargs
205
+ processed_kwargs["template_kwargs"].update(kwargs)
206
+
207
+ if isinstance(conversation, (list, tuple)) and (
208
+ isinstance(conversation[0], (list, tuple)) or hasattr(conversation[0], "content")
209
+ ):
210
+ is_batched = True
211
+ conversations = conversation
212
+ else:
213
+ is_batched = False
214
+ conversations = [conversation]
215
+
216
+ tokenize = processed_kwargs["template_kwargs"].pop("tokenize", False)
217
+ return_dict = processed_kwargs["template_kwargs"].pop("return_dict", False)
218
+ mm_load_kwargs = processed_kwargs["mm_load_kwargs"]
219
+
220
+ if tokenize:
221
+ batch_images, batch_videos = [], []
222
+ batch_audios = []
223
+ batch_video_metadata = []
224
+ for conversation in conversations:
225
+ images, videos = [], []
226
+ video_metadata = []
227
+ for message in conversation:
228
+ visuals = [content for content in message["content"] if content["type"] in ["image", "video"]]
229
+ audio_fnames = [
230
+ content[key]
231
+ for content in message["content"]
232
+ for key in ["audio", "url", "path"]
233
+ if key in content and content["type"] == "audio"
234
+ ]
235
+ image_fnames = [
236
+ vision_info[key]
237
+ for vision_info in visuals
238
+ for key in ["image", "url", "path", "base64"]
239
+ if key in vision_info and vision_info["type"] == "image"
240
+ ]
241
+ video_fnames = [
242
+ vision_info[key]
243
+ for vision_info in visuals
244
+ for key in ["video", "url", "path"]
245
+ if key in vision_info and vision_info["type"] == "video"
246
+ ]
247
+
248
+ for fname in image_fnames:
249
+ images.append(load_image(fname))
250
+
251
+ # Audio models do not accept nested list of audios (yet!) so we construct a flat input audio list
252
+ if not mm_load_kwargs["load_audio_from_video"]:
253
+ for fname in audio_fnames:
254
+ batch_audios.append(load_audio(fname, sampling_rate=mm_load_kwargs["sampling_rate"]))
255
+ else:
256
+ for fname in video_fnames:
257
+ batch_audios.append(load_audio(fname, sampling_rate=mm_load_kwargs["sampling_rate"]))
258
+
259
+ for fname in video_fnames:
260
+ if isinstance(fname, (list, tuple)) and isinstance(fname[0], str):
261
+ video = [np.array(load_image(image_fname)) for image_fname in fname]
262
+ # create a 4D video because `load_video` always returns a 4D array
263
+ video = np.stack(video)
264
+ metadata = None
265
+ logger.warning(
266
+ "When loading the video from list of images, we cannot infer metadata such as `fps` or `duration`. "
267
+ "If your model uses this metadata during processing, please load the whole video and let the model sample frames instead."
268
+ )
269
+ else:
270
+ # TODO: raushan, should be `self.video_processor.load_video_for_model` when API is added
271
+ video, metadata = self._load_video_for_model(
272
+ fname,
273
+ num_frames=mm_load_kwargs.get("num_frames", None),
274
+ fps=mm_load_kwargs.get("video_fps", None),
275
+ backend=mm_load_kwargs["video_load_backend"],
276
+ **kwargs,
277
+ )
278
+ videos.append(video)
279
+ video_metadata.append(metadata)
280
+
281
+ # Currently all processors can accept nested list of batches, but not flat list of visuals
282
+ # So we'll make a batched list of images and let the processor handle it
283
+ if images:
284
+ batch_images.append(images)
285
+ if videos:
286
+ batch_videos.append(videos)
287
+ batch_video_metadata.append(video_metadata)
288
+
289
+ # Process conversation with video/image information if needed. Then convert into a prompt using Jinja template
290
+ conversations = self._process_messages_for_chat_template(
291
+ conversations,
292
+ batch_images=batch_images,
293
+ batch_videos=batch_videos,
294
+ batch_video_metadata=batch_video_metadata,
295
+ **processed_kwargs["mm_load_kwargs"],
296
+ )
297
+
298
+ prompt, generation_indices = render_jinja_template(
299
+ conversations=conversations,
300
+ chat_template=chat_template,
301
+ **processed_kwargs["template_kwargs"], # different flags such as `return_assistant_mask`
302
+ **self.tokenizer.special_tokens_map, # tokenizer special tokens are used by some templates
303
+ )
304
+
305
+ if not is_batched:
306
+ prompt = prompt[0]
307
+
308
+ if tokenize:
309
+ # Tokenizer's `apply_chat_template` never adds special tokens when tokenizing
310
+ # But processor's `apply_chat_template` didn't have an option to tokenize, so users had to format the prompt
311
+ # and pass it to the processor. Users thus never worried about special tokens, relying on the processor to handle
312
+ # everything internally. The line below keeps backward compatibility for that and works with models that have
313
+ # special tokens in the template (consistent with tokenizers). We don't raise a warning here because it would
314
+ # flood the command line without an actionable fix for users.
315
+ single_prompt = prompt[0] if is_batched else prompt
316
+ if self.tokenizer.bos_token is not None and single_prompt.startswith(self.tokenizer.bos_token):
317
+ kwargs["add_special_tokens"] = False
318
+
319
+ out = self(
320
+ text=prompt,
321
+ images=batch_images if batch_images else None,
322
+ videos=batch_videos if batch_videos else None,
323
+ audio=batch_audios if batch_audios else None,
324
+ **kwargs,
325
+ )
326
+ if return_dict:
327
+ if processed_kwargs["template_kwargs"].get("return_assistant_tokens_mask", False):
328
+ assistant_masks = []
329
+ input_ids = out["input_ids"]
330
+ for i in range(len(input_ids)):
331
+ current_mask = [0] * len(input_ids[i])
332
+ for assistant_start_char, assistant_end_char in generation_indices[i]:
333
+ start_token = out.char_to_token(i, assistant_start_char)
334
+ end_token = out.char_to_token(i, assistant_end_char - 1)
335
+ if start_token is None:
336
+ # start_token is out of bounds maybe due to truncation.
337
+ break
338
+ for token_id in range(start_token, end_token + 1 if end_token else len(input_ids[i])):
339
+ current_mask[token_id] = 1
340
+ assistant_masks.append(current_mask)
341
+ out["assistant_masks"] = assistant_masks
342
+ out.convert_to_tensors(tensor_type=kwargs.get("return_tensors", None))
343
+
344
+ # vLLM needs vision_query_lengths, but the HF model does not
345
+ del out["vision_query_lengths_images"]
346
+ del out["vision_query_lengths_videos"]
347
+ return out
348
+ else:
349
+ return out["input_ids"]
350
+
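# Hedged usage sketch for the method above (annotation; the repo path is illustrative,
# the image URL is the one already shown in the docstring):
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("path/to/this/repo", trust_remote_code=True)
chat = [{"role": "user", "content": [
    {"type": "image", "image": "https://www.ilankelman.org/stopsigns/australia.jpg"},
    {"type": "text", "text": "Please describe this image in detail."},
]}]
inputs = processor.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_dict=True)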
351
+ def repeat_dummy_tokens(self, input_ids, target_token_id, vision_query_lengths):
352
+ input_ids = input_ids.clone().detach()
353
+ batch_indices, target_indices = torch.where(input_ids == target_token_id)
354
+ batch_size = input_ids.shape[0]
355
+
356
+ new_input_ids = [[] for _ in range(batch_size)]
357
+ start_indices = [0 for _ in range(batch_size)]
358
+ counter = [0 for _ in range(batch_size)]
359
+ for batch_idx, target_idx in zip(batch_indices, target_indices):
360
+ start_idx = start_indices[batch_idx]
361
+ new_input_ids[batch_idx].append(input_ids[batch_idx][start_idx:target_idx])
362
+ query_length = vision_query_lengths[batch_idx][counter[batch_idx]]
363
+ new_input_ids[batch_idx].append(input_ids[batch_idx][target_idx].repeat(query_length))
364
+ start_indices[batch_idx] = target_idx + 1
365
+ counter[batch_idx] += 1
366
+
367
+ for batch_idx in range(batch_size):
368
+ start_idx = start_indices[batch_idx]
369
+ new_input_ids[batch_idx].append(input_ids[batch_idx][start_idx:]) # append remaining tokens
370
+ new_input_ids[batch_idx] = torch.cat(new_input_ids[batch_idx], dim=0)
371
+
372
+ new_input_ids = torch.stack(new_input_ids)
373
+ return new_input_ids
374
+
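# Toy walk-through of repeat_dummy_tokens (it never touches `self`, so it can be
# exercised unbound; the ids and query lengths below are illustrative):
import torch

ids = torch.tensor([[7, 99, 8, 99, 9]])  # 99 = placeholder token id
out = HCXProcessor.repeat_dummy_tokens(None, ids, 99, [[2, 3]])
print(out)  # tensor([[ 7, 99, 99,  8, 99, 99, 99,  9]])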
375
+ def _load_video_for_model(
376
+ self,
377
+ video: str,
378
+ num_frames: Optional[int] = None,
379
+ fps: Optional[int] = None,
380
+ backend: str = "opencv",
381
+ **kwargs: Unpack[HCXProcessorKwargs],
382
+ ) -> Tuple[np.ndarray, VideoMetadata]:
383
+ """
384
+ Overridden function.
385
+
386
+ Loads `video` and samples frames from it (llava style).
387
+
388
+ Args:
389
+ video (`str`):
390
+ The video to convert to the numpy array format. Can be a link to a video or a local path.
391
+ num_frames (`int`, *optional*):
392
+ Number of frames to sample uniformly. If not passed, the whole video is loaded.
393
+ fps (`int`, *optional*):
394
+ Number of frames to sample per second. Should be passed only when `num_frames=None`.
395
+ If not specified and `num_frames==None`, all frames are sampled.
396
+ backend (`str`, *optional*, defaults to `"opencv"`):
397
+ The backend to use when loading the video. Can be any of ["decord", "pyav", "opencv", "torchvision"]. Defaults to "opencv".
398
+
399
+ Returns:
400
+ Tuple[`np.ndarray`, `VideoMetadata`]: A tuple containing:
401
+ - The sampled frames as a 4D array in RGB.
402
+ - A metadata object for the loaded video.
403
+ """
404
+ output_kwargs = self._merge_kwargs(
405
+ HCXProcessorKwargs,
406
+ tokenizer_init_kwargs=self.tokenizer.init_kwargs,
407
+ **kwargs,
408
+ )
409
+
410
+ logger.warning_once(f"num_frames control via argument is not supported yet. Ignored num_frames: {num_frames}.")
411
+ logger.warning_once(f"fps control via argument is not supported yet. Ignored fps: {fps}.")
412
+ logger.warning_once(f"backend control via argument is not supported yet. Ignored backend: {backend}.")
413
+
414
+ # video_loaded, video_metadata = load_video(
415
+ # video, backend="decord", num_frames=32
416
+ # )
417
+ # frame_interval = int(video_metadata.total_num_frames / 32)
418
+ # time_interval = frame_interval / video_metadata.fps
419
+ # video_metadata.time_interval = time_interval
420
+
421
+ def _hcx_sample_indices_fn(metadata: VideoMetadata, num_frames=None, fps=None, **kwargs):
422
+ max_num_grids = output_kwargs["videos_kwargs"]["max_num_grids"]
423
+ max_image_cnt = output_kwargs["videos_kwargs"]["max_image_cnt"]
424
+ frame_indices, time_interval = extract_frame_indices(
425
+ metadata.duration,
426
+ metadata.total_num_frames,
427
+ metadata.fps,
428
+ max_num_grids,
429
+ max_image_cnt,
430
+ default_interval=0.4,
431
+ )
432
+ metadata.time_interval = time_interval
433
+ return np.array(frame_indices)
434
+
435
+ video_loaded, video_metadata = None, None
436
+ for backend in ["decord", "pyav", "opencv", "torchvision"]:
437
+ try:
438
+ video_loaded, video_metadata = load_video(
439
+ video, sample_indices_fn=_hcx_sample_indices_fn, backend=backend
440
+ )
441
+ break
442
+ except Exception as e:
443
+ logger.error(f"Error loading video with {backend} backend: {e}")
444
+ continue
445
+
446
+ assert video_loaded is not None, "Failed to load video with any backend"
447
+
448
+ return video_loaded, video_metadata
449
+
450
+ def _process_messages_for_chat_template(
451
+ self,
452
+ conversation: List[List[Dict[str, str]]],
453
+ batch_images: List[List[ImageInput]],
454
+ batch_videos: List[List[VideoInput]],
455
+ batch_video_metadata: List[List[Dict[str, any]]],
456
+ **mm_load_kwargs: Unpack[ChatTemplateLoadKwargs],
457
+ ):
458
+ """
459
+ Overridden function.
460
+ Used within `apply_chat_template` when a model has a special way to process conversation history. For example,
461
+ video models might want to specify in the prompt the duration of video or which frame indices at which timestamps
462
+ were sampled. This information cannot be accessed before the video is loaded.
463
+
464
+ For most models it is a no-op, and must be overridden by model processors which require special processing.
465
+
466
+ Args:
467
+ conversation (`List[List[Dict[str, str]]]`):
468
+ The conversation to process. Always comes in batched format.
469
+ batch_images (`List[List[ImageInput]]`):
470
+ Batch of images that were loaded from url/path defined in the conversation. The images
471
+ are ordered in the same way as in the conversation. Comes in nested list format, one list of `PIL` images
472
+ per batch.
473
+ batch_videos (`List[List[ImageInput]]`):
474
+ Batch of videos that were loaded from url/path defined in the conversation. The videos
475
+ are ordered in the same way as in the conversation. Comes in nested list format, one list of `PIL.Image`
476
+ per batch.
477
+ batch_video_metadata (`List[List[Dict[str, any]]]`):
478
+ Batch of metadata returned from loading videos. This includes video fps, duration, and the total number of frames in the original video.
479
+ Metadata are ordered in the same way as `batch_videos`. Comes in nested list format, one list of `Dict`
480
+ per batch.
481
+ """
482
+
483
+ is_video_in_conversation = False
484
+ for batch_idx, messages in enumerate(conversation):
485
+ is_video_in_messages = False
486
+ is_image_in_messages = False
487
+ for message in messages:
488
+ for content in message["content"]:
489
+ if content["type"] == "video":
490
+ is_video_in_messages = True
491
+ elif content["type"] == "image":
492
+ is_image_in_messages = True
493
+ if not is_video_in_messages:
494
+ batch_videos.insert(batch_idx, [])
495
+ batch_video_metadata.insert(batch_idx, [])
496
+ if not is_image_in_messages:
497
+ batch_images.insert(batch_idx, [])
498
+
499
+ is_video_in_conversation = is_video_in_conversation or is_video_in_messages
500
+
501
+ if not is_video_in_conversation:
502
+ return conversation
503
+
504
+ # conversation processing
505
+ new_conversation = []
506
+ for batch_idx, messages in enumerate(conversation):
507
+ video_counter = 0
508
+ new_messages = []
509
+
510
+ for message in messages:
511
+ new_message = {
512
+ "role": message["role"],
513
+ "content": [],
514
+ }
515
+ for content in message["content"]:
516
+ if content["type"] == "video":
517
+ video = batch_videos[batch_idx][video_counter]
518
+ video_meta = batch_video_metadata[batch_idx][video_counter]
519
+
520
+ time_stamps = calc_timestamp_video_grids(video, video_meta.time_interval, max_grid_shape=(3, 3))
521
+ video_counter += 1
522
+
523
+ if "filename" in content:
524
+ filename = content["filename"]
525
+ else:
526
+ filename = content["video"].split("/")[-1]
527
+ if len(filename) > 50:
528
+ filename = f"{uuid.uuid4().hex}.mp4"
529
+ basename, ext = os.path.splitext(filename)
530
+ if ext == "":
531
+ ext = ".mp4"
532
+
533
+ for frame_idx, time_stamp in enumerate(time_stamps):
534
+ if frame_idx == len(time_stamps) - 1:  # final grid: compare against the number of canvases, not the raw frame count
535
+ # final_grid
536
+ new_content = {
537
+ "filename": f"{basename}-{frame_idx}{ext}",
538
+ "video": content["video"],
539
+ "type": "video",
540
+ "video_time_stamp": time_stamp,
541
+ "lens_keywords": content["lens_keywords"],
542
+ "lens_local_keywords": content["lens_local_keywords"],
543
+ "speech_to_text": content["speech_to_text"],
544
+ "is_final_grid": True,
545
+ }
546
+ new_message["content"].append(new_content)
547
+ else:
548
+ new_content = {
549
+ "filename": f"{basename}-{frame_idx}{ext}",
550
+ "video": content["video"],
551
+ "type": "video",
552
+ "video_time_stamp": time_stamp,
553
+ }
554
+ new_message["content"].append(new_content)
555
+ else:
556
+ new_message["content"].append(copy.deepcopy(content))
557
+ new_messages.append(new_message)
558
+ new_conversation.append(new_messages)
559
+
560
+ return new_conversation
561
+
562
+ def __call__(
563
+ self,
564
+ text: TextInput = None,
565
+ images: List[List[ImageInput]] = None,
566
+ videos: List[List[VideoInput]] = None,
567
+ audio: AudioInput = None,
568
+ **kwargs: Unpack[HCXProcessorKwargs],
569
+ ):
570
+ output_kwargs = self._merge_kwargs(
571
+ HCXProcessorKwargs,
572
+ tokenizer_init_kwargs=self.tokenizer.init_kwargs,
573
+ **kwargs,
574
+ )
575
+
576
+ # prepare model inputs
577
+ mm_inputs = {
578
+ "pixel_values_images": [],
579
+ "image_sizes_images": [],
580
+ "vision_query_lengths_images": [],
581
+ "pixel_values_videos": [],
582
+ # "image_sizes_videos": [],
583
+ "vision_query_lengths_videos": [],
584
+ }
585
+ calc_non_vision_query_lengths = output_kwargs["text_kwargs"].pop("calc_non_vision_query_lengths")
586
+ if calc_non_vision_query_lengths:
587
+ mm_inputs["non_vision_query_lengths"] = []
588
+
589
+ # video processing
590
+ if videos is not None:
591
+ vit_input_size = self.image_processor.crop_size["width"]
592
+
593
+ video_kwargs = copy.deepcopy(output_kwargs["videos_kwargs"])
594
+
595
+ for videos_in_single_conversation in videos:
596
+ pixel_values_videos = []
597
+ vision_query_lengths_videos = []
598
+
599
+ for video_frames in videos_in_single_conversation:
600
+ if len(video_frames) == 0:
601
+ mm_inputs["pixel_values_videos"].append([])
602
+ mm_inputs["vision_query_lengths_videos"].append([])
603
+ continue
604
+ video_frames_combined = combine_frames_into_images(
605
+ video_frames, max_grid_shape=(3, 3), vit_input_size=vit_input_size
606
+ )
607
+ video_kwargs["is_video"] = True
608
+ video_kwargs["return_tensors"] = None
609
+
610
+ frames_processed = self.image_processor(images=video_frames_combined, **video_kwargs)
611
+ sizes = [(size["width"], size["height"]) for size in frames_processed["image_sizes"]]
612
+
613
+ pixel_values_videos.extend(frames_processed["pixel_values"])
614
+ vision_query_lengths_videos.extend(frames_processed["vision_query_lengths"])
615
+
616
+ mm_inputs["pixel_values_videos"].append(pixel_values_videos)
617
+ mm_inputs["vision_query_lengths_videos"].append(vision_query_lengths_videos)
618
+
619
+ # image processing
620
+ if images is not None:
621
+ image_kwargs = copy.deepcopy(output_kwargs["images_kwargs"])
622
+ image_kwargs["is_video"] = False
623
+ image_kwargs["return_tensors"] = None
624
+
625
+ for images_in_single_conversation in images:
626
+ if isinstance(images_in_single_conversation, PIL.Image.Image): # single item to batch
627
+ images_in_single_conversation = [images_in_single_conversation, ]
628
+ if len(images_in_single_conversation) == 0:
629
+ mm_inputs["pixel_values_images"].append([])
630
+ mm_inputs["image_sizes_images"].append([])
631
+ mm_inputs["vision_query_lengths_images"].append([])
632
+ continue
633
+ images_processed = self.image_processor(images=images_in_single_conversation, **image_kwargs)
634
+ sizes = [(size["width"], size["height"]) for size in images_processed["image_sizes"]]
635
+
636
+ mm_inputs["pixel_values_images"].append(images_processed["pixel_values"])
637
+ mm_inputs["image_sizes_images"].append(sizes)
638
+ mm_inputs["vision_query_lengths_images"].append(images_processed["vision_query_lengths"])
639
+
640
+ # text processing
641
+ def _create_replacer(_target_token, _replacements):
642
+ _iterator = iter(_replacements)
643
+
644
+ def _replacer(match_obj):
645
+ # return self.image_token
646
+ num_query_tokens = next(_iterator)
647
+ return "".join([_target_token for _ in range(num_query_tokens)])
648
+ return _replacer
649
+
650
+ text_inputs = {}
651
+ if text is not None:
652
+ if not isinstance(text, list):
653
+ text = [text]
654
+
655
+ if images is not None:
656
+ new_texts = []
657
+ for batch_idx, text_in_single_conversation in enumerate(text):
658
+ new_text = self.image_token_pattern.sub(
659
+ _create_replacer(self.image_token, mm_inputs["vision_query_lengths_images"][batch_idx]),
660
+ text_in_single_conversation,
661
+ )
662
+ new_texts.append(new_text)
663
+ text = new_texts
664
+
665
+ if videos is not None:
666
+ new_texts = []
667
+ for batch_idx, text_in_single_conversation in enumerate(text):
668
+ new_text = self.video_token_pattern.sub(
669
+ _create_replacer(self.video_token, mm_inputs["vision_query_lengths_videos"][batch_idx]),
670
+ text_in_single_conversation,
671
+ )
672
+ new_texts.append(new_text)
673
+ text = new_texts
674
+
675
+ text_inputs = self.tokenizer(text, **output_kwargs["text_kwargs"])
676
+
677
+ # audio processing
678
+ if audio is not None:
679
+ raise NotImplementedError("Audio processing is not supported yet.")
680
+
681
+ return HCXBatchFeature(data={**text_inputs, **mm_inputs})
682
+
683
+ def decode(self, *args, **kwargs):
684
+ """
685
+ This method forwards all its arguments to the tokenizer's [`~PreTrainedTokenizer.decode`]. Please refer to
686
+ the docstring of this method for more information.
687
+ """
688
+ return self.tokenizer.decode(*args, **kwargs)
689
+
690
+ def batch_decode(self, *args, **kwargs):
691
+ """
692
+ This method forwards all its arguments to the tokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please
693
+ refer to the docstring of this method for more information.
694
+ """
695
+ return self.tokenizer.batch_decode(*args, **kwargs)
696
+
697
+ def post_process_image_text_to_text(
698
+ self, generated_outputs, skip_special_tokens=True, clean_up_tokenization_spaces=False, **kwargs
699
+ ):
700
+ """
701
+ Post-process the output of the model to decode the text.
702
+
703
+ Args:
704
+ generated_outputs (`torch.Tensor` or `np.ndarray`):
705
+ The output of the model `generate` function. The output is expected to be a tensor of shape `(batch_size, sequence_length)`
706
+ or `(sequence_length,)`.
707
+ skip_special_tokens (`bool`, *optional*, defaults to `True`):
708
+ Whether or not to remove special tokens in the output. Argument passed to the tokenizer's `batch_decode` method.
709
+ clean_up_tokenization_spaces (`bool`, *optional*, defaults to `False`):
710
+ Whether or not to clean up the tokenization spaces. Argument passed to the tokenizer's `batch_decode` method.
711
+ **kwargs:
712
+ Additional arguments to be passed to the tokenizer's `batch_decode` method.
713
+
714
+ Returns:
715
+ `List[str]`: The decoded text.
716
+ """
717
+ return self.tokenizer.batch_decode(
718
+ generated_outputs,
719
+ skip_special_tokens=skip_special_tokens,
720
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
721
+ **kwargs,
722
+ )
723
+
724
+ @property
725
+ def model_input_names(self):
726
+ tokenizer_input_names = self.tokenizer.model_input_names
727
+ image_processor_input_names = self.image_processor.model_input_names
728
+ names_from_processor = list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
729
+ return names_from_processor + []
730
+
731
+
732
+ def extract_frame_indices(play_time, total_frames, fps, max_num_grids, max_image_cnt, default_interval=0.4):
733
+ """
734
+ Extracts specific frame indices from a video based on duration, frame count, and sampling strategy.
735
+
736
+ The function determines which frames to extract given the video duration (`play_time`),
737
+ total frame count, and frame rate. It samples frames at regular intervals (default: 0.4s),
738
+ but if the number of frames exceeds the limit defined by `max_num_grids * max_image_cnt`,
739
+ it performs uniform sampling to stay within that limit.
740
+
741
+ Args:
742
+ play_time (float): Total play time of the video in seconds.
743
+ total_frames (int): Total number of frames in the video.
744
+ fps (float): Frames per second of the video.
745
+ max_num_grids (int): Maximum number of grids to display.
746
+ max_image_cnt (int): Maximum number of images per grid.
747
+ default_interval (float, optional): Interval in seconds between frame samples. Defaults to 0.4.
748
+
749
+ Returns:
750
+ Tuple:
751
+ frame_indices (List[int]): A list of selected frame indices.
752
+ time_interval (float): Time interval between selected frames (in seconds).
753
+ """
754
+
755
+ # Calculate how many frames to extract with the default interval
756
+ default_frame_count = max(int(play_time / default_interval), 1)  # guard: at least one frame for very short clips
757
+
758
+ # Maximum frames allowed based on max_num_grids and max_image_cnt
759
+ max_frames_allowed = max_num_grids * max_image_cnt
760
+
761
+ # Determine whether we can use the default interval or need uniform sampling
762
+ if default_frame_count <= max_frames_allowed:
763
+ # Default interval is sufficient, extract frames every 0.4 seconds
764
+ frame_interval = max(int(total_frames / default_frame_count), 1)  # guard: range() step must be >= 1
765
+ else:
766
+ # Use uniform sampling to fit within max_frames_allowed
767
+ frame_interval = max(int(total_frames / max_frames_allowed), 1)  # guard: range() step must be >= 1
768
+
769
+ # Extract frame indices at the calculated interval
770
+ selected_indices = list(range(0, total_frames, frame_interval))
771
+
772
+ time_interval = frame_interval / fps
773
+
774
+ # Ensure the number of selected indices does not exceed max_frames_allowed
775
+ return selected_indices[:max_frames_allowed], time_interval
776
+
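# Worked example of the sampling rule above (illustrative numbers): a 60 s clip at
# 30 fps has 1800 frames; the default 0.4 s interval would need 150 frames, more than
# max_num_grids * max_image_cnt = 9 * 12 = 108, so uniform sampling applies with
# frame_interval = 1800 // 108 = 16 and time_interval = 16 / 30 ~= 0.533 s.
idx, dt = extract_frame_indices(play_time=60.0, total_frames=1800, fps=30,
                                max_num_grids=9, max_image_cnt=12)
print(len(idx), round(dt, 3))  # 108 0.533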
777
+
778
+ def calc_timestamp_video_grids(frames, time_interval, max_grid_shape=(3, 3)):
779
+ """
780
+ Calculates the time range labels for each grid in a video.
781
+
782
+ Args:
783
+ frames (List[PIL.Image.Image]): A list of frames extracted from a video.
784
+ time_interval (float): Time interval (in seconds) between consecutive frames.
785
+ max_grid_shape (Tuple[int, int], optional): The maximum grid shape as (rows, cols). Defaults to (3, 3).
786
+
787
+
788
+ Returns:
789
+ image_time_stamps (List[str]): A list of time span labels for each combined image,
790
+ e.g., ["0.00s~1.50s", "1.50s~3.00s", ...].
791
+
792
+ """
793
+ max_num_grids = max_grid_shape[0] * max_grid_shape[1]
794
+ # assert (
795
+ # max_grid_shape[1] == 1
796
+ # ), f"For video processing, decided to concatenate frames horizontally into a wide image."
797
+
798
+ # Calculate the number of canvases needed.
799
+ num_frames = len(frames)
800
+ num_canvases = num_frames // max_num_grids
801
+ leftover_frames = num_frames % max_num_grids
802
+
803
+ time_stamp = 0 # second
804
+ image_time_stamps = []
805
+
806
+ for canvas_idx in range(num_canvases):
807
+ # Determine the frames to fill in the current canvas.
808
+ start_idx = canvas_idx * max_num_grids
809
+ end_idx = min(start_idx + max_num_grids, num_frames)
810
+
811
+ # Append the current canvas to the result list.
812
+ frame_cnt = end_idx - start_idx
813
+ image_time_stamps.append(f"{time_stamp:.2f}s~{time_stamp + frame_cnt * time_interval:.2f}s")
814
+ time_stamp += frame_cnt * time_interval
815
+
816
+ if leftover_frames > 0:
817
+ # Add the current canvas to the list of combined images.
818
+ frame_cnt = leftover_frames
819
+ image_time_stamps.append(f"{time_stamp:.2f}s~{time_stamp + frame_cnt * time_interval:.2f}s")
820
+ time_stamp += frame_cnt * time_interval
821
+
822
+ return image_time_stamps
823
+
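# Example output of the function above: only len(frames) is used, so 20 placeholder
# frames with time_interval=0.5 and the default 3x3 grid give two full canvases plus
# a 2-frame remainder:
labels = calc_timestamp_video_grids([None] * 20, time_interval=0.5)
print(labels)  # ['0.00s~4.50s', '4.50s~9.00s', '9.00s~10.00s']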
824
+
825
+ def combine_frames_into_images(frames, max_grid_shape=(3, 3), vit_input_size=378):
826
+ """
827
+ Combines a sequence of video frames into grid-based images.
828
+
829
+ Frames are grouped and arranged into a grid (e.g., 3x3) such that each combined image contains up to
830
+ `max_grid_shape[0] * max_grid_shape[1]` frames. Each combined image is resized to the given ViT input size.
831
+
832
+ Args:
833
+ frames (NDArray): Frames extracted from a video, of shape (num_frames, H, W, C).
834
+
835
+ max_grid_shape (Tuple[int, int], optional): The maximum grid shape as (rows, cols). Defaults to (3, 3).
836
+ vit_input_size (int, optional): The target size (height and width) for the Vision Transformer input. Defaults to 378.
837
+
838
+ Returns:
839
+ image_list (List[PIL.Image.Image]): A list of grid-combined images.
840
+
841
+ """
842
+ max_num_grids = max_grid_shape[0] * max_grid_shape[1]
843
+ # assert (
844
+ # max_grid_shape[1] == 1
845
+ # ), f"For video processing, decided to concatenate frames horizontally into a wide image."
846
+
847
+ # List to store the resulting combined images.
848
+ image_list = []
849
+
850
+ # Calculate the number of canvases needed.
851
+ num_frames = len(frames)
852
+ num_canvases = num_frames // max_num_grids
853
+ leftover_frames = num_frames % max_num_grids
854
+
855
+ # change frames (4d numpy tensor) to List[PIL.Image.Image]
856
+ frames = [Image.fromarray(frame) for frame in frames]
857
+
858
+ for canvas_idx in range(num_canvases):
859
+ # Initialize the current canvas.
860
+ combined_image = Image.new(
861
+ "RGB", (vit_input_size * max_grid_shape[0], vit_input_size * max_grid_shape[1]), color=(0, 0, 0)
862
+ )
863
+
864
+ # Determine the frames to fill in the current canvas.
865
+ start_idx = canvas_idx * max_num_grids
866
+ end_idx = min(start_idx + max_num_grids, num_frames)
867
+
868
+ for idx in range(start_idx, end_idx):
869
+ img = frames[idx]
870
+
871
+ # Resize each frame to a square shape.
872
+ img_resized = img.resize((vit_input_size, vit_input_size))
873
+
874
+ # Calculate the (row, column) position to place the frame within the grid layout.
875
+ local_idx = idx - start_idx
876
+ x_offset = (local_idx % max_grid_shape[0]) * vit_input_size
877
+ y_offset = (local_idx // max_grid_shape[0]) * vit_input_size
878
+
879
+ # Calculate the position to place the frame in the grid.
880
+ combined_image.paste(img_resized, (x_offset, y_offset))
881
+
882
+ # Append the current canvas to the result list.
883
+ image_list.append(combined_image)
884
+
885
+ if leftover_frames > 0:
886
+ # Final canvas index for the leftover frames (not otherwise used below).
887
+ canvas_idx = num_canvases
888
+ # Add the remaining frames to the final canvas.
889
+ # combined_image = Image.new("RGB", (vit_input_size * leftover_frames, vit_input_size * 1), color=(0, 0, 0)) # hsk
890
+ combined_image = Image.new(
891
+ "RGB", (vit_input_size * max_grid_shape[0], vit_input_size * max_grid_shape[1]), color=(0, 0, 0)
892
+ )
893
+
894
+ for idx in range(leftover_frames):
895
+ img = frames[num_canvases * max_num_grids + idx]
896
+
897
+ # Resize the frame to a square (equal width and height).
898
+ img_resized = img.resize((vit_input_size, vit_input_size))
899
+
900
+ # Calculate the (row, column) position to place the frame within the grid layout.
901
+ # x_offset = (idx % leftover_frames) * vit_input_size # hsk
902
+ # y_offset = (idx // leftover_frames) * vit_input_size # hsk
903
+ x_offset = (idx % max_grid_shape[0]) * vit_input_size
904
+ y_offset = (idx // max_grid_shape[0]) * vit_input_size
905
+
906
+ # Calculate the position to place the frame within the grid layout.
907
+ combined_image.paste(img_resized, (x_offset, y_offset))
908
+
909
+ # Add the current canvas to the list of combined images.
910
+ image_list.append(combined_image)
911
+
912
+ return image_list
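A quick shape check for `combine_frames_into_images` (dummy frames; assumes numpy and the function as defined above): 20 frames at the default 3x3 grid yield 20 // 9 = 2 full canvases plus one final canvas with the 2 leftover frames.

import numpy as np

frames = np.zeros((20, 64, 64, 3), dtype=np.uint8)
grids = combine_frames_into_images(frames, max_grid_shape=(3, 3), vit_input_size=378)
print(len(grids), grids[0].size)  # 3 (1134, 1134): two full 3x3 canvases plus a remainder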
processor_config.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_hyperclovax.HCXProcessor"
4
+ },
5
+ "processor_class": "HCXProcessor"
6
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,86 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|endoftext|>",
4
+ "<|fim_prefix|>",
5
+ "<|fim_middle|>",
6
+ "<|fim_suffix|>",
7
+ "<|endofprompt|>",
8
+ "<|_unuse_missing_100256|>",
9
+ "<|_unuse_missing_100261|>",
10
+ "<|_unuse_missing_100262|>",
11
+ "<|_unuse_missing_100263|>",
12
+ "<|_unuse_missing_100264|>",
13
+ "<|_unuse_missing_100265|>",
14
+ "<|_unuse_missing_100266|>",
15
+ "<|_unuse_missing_100267|>",
16
+ "<|_unuse_missing_100268|>",
17
+ "<|_unuse_missing_100269|>",
18
+ "<|_unuse_missing_100270|>",
19
+ "<|dummy3|>",
20
+ "<|im_start|>",
21
+ "<|im_end|>",
22
+ "<|stop|>",
23
+ "<|endofturn|>",
24
+ "<repo_name>",
25
+ "<file_sep>",
26
+ "<issue_start>",
27
+ "<issue_comment>",
28
+ "<issue_closed>",
29
+ "<jupyter_start>",
30
+ "<jupyter_text>",
31
+ "<jupyter_code>",
32
+ "<jupyter_output>",
33
+ "<jupyter_script>",
34
+ "<empty_output>",
35
+ "<code_to_intermediate>",
36
+ "<intermediate_to_code>",
37
+ "<pr>",
38
+ "<pr_status>",
39
+ "<pr_is_merged>",
40
+ "<pr_base>",
41
+ "<pr_file>",
42
+ "<pr_base_code>",
43
+ "<pr_diff>",
44
+ "<pr_diff_hunk>",
45
+ "<pr_comment>",
46
+ "<pr_event_id>",
47
+ "<pr_review>",
48
+ "<pr_review_state>",
49
+ "<pr_review_comment>",
50
+ "<pr_in_reply_to_review_id>",
51
+ "<pr_in_reply_to_comment_id>",
52
+ "<pr_diff_hunk_comment_line>",
53
+ "<NAME>",
54
+ "<EMAIL>",
55
+ "<KEY>",
56
+ "<PASSWORD>"
57
+ ],
58
+ "bos_token": {
59
+ "content": "<|endoftext|>",
60
+ "lstrip": false,
61
+ "normalized": false,
62
+ "rstrip": false,
63
+ "single_word": false
64
+ },
65
+ "eos_token": {
66
+ "content": "<|endofturn|>",
67
+ "lstrip": false,
68
+ "normalized": false,
69
+ "rstrip": false,
70
+ "single_word": false
71
+ },
72
+ "pad_token": {
73
+ "content": "<|endoftext|>",
74
+ "lstrip": false,
75
+ "normalized": false,
76
+ "rstrip": false,
77
+ "single_word": false
78
+ },
79
+ "unk_token": {
80
+ "content": "<|endoftext|>",
81
+ "lstrip": false,
82
+ "normalized": false,
83
+ "rstrip": false,
84
+ "single_word": false
85
+ }
86
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,507 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "100256": {
6
+ "content": "<|_unuse_missing_100256|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "100257": {
14
+ "content": "<|endoftext|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "100258": {
22
+ "content": "<|fim_prefix|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "100259": {
30
+ "content": "<|fim_middle|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "100260": {
38
+ "content": "<|fim_suffix|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "100261": {
46
+ "content": "<|_unuse_missing_100261|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "100262": {
54
+ "content": "<|_unuse_missing_100262|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "100263": {
62
+ "content": "<|_unuse_missing_100263|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "100264": {
70
+ "content": "<|_unuse_missing_100264|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "100265": {
78
+ "content": "<|_unuse_missing_100265|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "100266": {
86
+ "content": "<|_unuse_missing_100266|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "100267": {
94
+ "content": "<|_unuse_missing_100267|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "100268": {
102
+ "content": "<|_unuse_missing_100268|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "100269": {
110
+ "content": "<|_unuse_missing_100269|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "100270": {
118
+ "content": "<|_unuse_missing_100270|>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": true
124
+ },
125
+ "100271": {
126
+ "content": "<|dummy3|>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": true
132
+ },
133
+ "100272": {
134
+ "content": "<|im_start|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": true
140
+ },
141
+ "100273": {
142
+ "content": "<|im_end|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": true
148
+ },
149
+ "100274": {
150
+ "content": "<|stop|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": true
156
+ },
157
+ "100275": {
158
+ "content": "<|endofturn|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": true
164
+ },
165
+ "100276": {
166
+ "content": "<|endofprompt|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": true
172
+ },
173
+ "110491": {
174
+ "content": "<repo_name>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": true
180
+ },
181
+ "110492": {
182
+ "content": "<file_sep>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": true
188
+ },
189
+ "110493": {
190
+ "content": "<issue_start>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": true
196
+ },
197
+ "110494": {
198
+ "content": "<issue_comment>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": true
204
+ },
205
+ "110495": {
206
+ "content": "<issue_closed>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": true
212
+ },
213
+ "110496": {
214
+ "content": "<jupyter_start>",
215
+ "lstrip": false,
216
+ "normalized": false,
217
+ "rstrip": false,
218
+ "single_word": false,
219
+ "special": true
220
+ },
221
+ "110497": {
222
+ "content": "<jupyter_text>",
223
+ "lstrip": false,
224
+ "normalized": false,
225
+ "rstrip": false,
226
+ "single_word": false,
227
+ "special": true
228
+ },
229
+ "110498": {
230
+ "content": "<jupyter_code>",
231
+ "lstrip": false,
232
+ "normalized": false,
233
+ "rstrip": false,
234
+ "single_word": false,
235
+ "special": true
236
+ },
237
+ "110499": {
238
+ "content": "<jupyter_output>",
239
+ "lstrip": false,
240
+ "normalized": false,
241
+ "rstrip": false,
242
+ "single_word": false,
243
+ "special": true
244
+ },
245
+ "110500": {
246
+ "content": "<jupyter_script>",
247
+ "lstrip": false,
248
+ "normalized": false,
249
+ "rstrip": false,
250
+ "single_word": false,
251
+ "special": true
252
+ },
253
+ "110501": {
254
+ "content": "<empty_output>",
255
+ "lstrip": false,
256
+ "normalized": false,
257
+ "rstrip": false,
258
+ "single_word": false,
259
+ "special": true
260
+ },
261
+ "110502": {
262
+ "content": "<code_to_intermediate>",
263
+ "lstrip": false,
264
+ "normalized": false,
265
+ "rstrip": false,
266
+ "single_word": false,
267
+ "special": true
268
+ },
269
+ "110503": {
270
+ "content": "<intermediate_to_code>",
271
+ "lstrip": false,
272
+ "normalized": false,
273
+ "rstrip": false,
274
+ "single_word": false,
275
+ "special": true
276
+ },
277
+ "110504": {
278
+ "content": "<pr>",
279
+ "lstrip": false,
280
+ "normalized": false,
281
+ "rstrip": false,
282
+ "single_word": false,
283
+ "special": true
284
+ },
285
+ "110505": {
286
+ "content": "<pr_status>",
287
+ "lstrip": false,
288
+ "normalized": false,
289
+ "rstrip": false,
290
+ "single_word": false,
291
+ "special": true
292
+ },
293
+ "110506": {
294
+ "content": "<pr_is_merged>",
295
+ "lstrip": false,
296
+ "normalized": false,
297
+ "rstrip": false,
298
+ "single_word": false,
299
+ "special": true
300
+ },
301
+ "110507": {
302
+ "content": "<pr_base>",
303
+ "lstrip": false,
304
+ "normalized": false,
305
+ "rstrip": false,
306
+ "single_word": false,
307
+ "special": true
308
+ },
309
+ "110508": {
310
+ "content": "<pr_file>",
311
+ "lstrip": false,
312
+ "normalized": false,
313
+ "rstrip": false,
314
+ "single_word": false,
315
+ "special": true
316
+ },
317
+ "110509": {
318
+ "content": "<pr_base_code>",
319
+ "lstrip": false,
320
+ "normalized": false,
321
+ "rstrip": false,
322
+ "single_word": false,
323
+ "special": true
324
+ },
325
+ "110510": {
326
+ "content": "<pr_diff>",
327
+ "lstrip": false,
328
+ "normalized": false,
329
+ "rstrip": false,
330
+ "single_word": false,
331
+ "special": true
332
+ },
333
+ "110511": {
334
+ "content": "<pr_diff_hunk>",
335
+ "lstrip": false,
336
+ "normalized": false,
337
+ "rstrip": false,
338
+ "single_word": false,
339
+ "special": true
340
+ },
341
+ "110512": {
342
+ "content": "<pr_comment>",
343
+ "lstrip": false,
344
+ "normalized": false,
345
+ "rstrip": false,
346
+ "single_word": false,
347
+ "special": true
348
+ },
349
+ "110513": {
350
+ "content": "<pr_event_id>",
351
+ "lstrip": false,
352
+ "normalized": false,
353
+ "rstrip": false,
354
+ "single_word": false,
355
+ "special": true
356
+ },
357
+ "110514": {
358
+ "content": "<pr_review>",
359
+ "lstrip": false,
360
+ "normalized": false,
361
+ "rstrip": false,
362
+ "single_word": false,
363
+ "special": true
364
+ },
365
+ "110515": {
366
+ "content": "<pr_review_state>",
367
+ "lstrip": false,
368
+ "normalized": false,
369
+ "rstrip": false,
370
+ "single_word": false,
371
+ "special": true
372
+ },
373
+ "110516": {
374
+ "content": "<pr_review_comment>",
375
+ "lstrip": false,
376
+ "normalized": false,
377
+ "rstrip": false,
378
+ "single_word": false,
379
+ "special": true
380
+ },
381
+ "110517": {
382
+ "content": "<pr_in_reply_to_review_id>",
383
+ "lstrip": false,
384
+ "normalized": false,
385
+ "rstrip": false,
386
+ "single_word": false,
387
+ "special": true
388
+ },
389
+ "110518": {
390
+ "content": "<pr_in_reply_to_comment_id>",
391
+ "lstrip": false,
392
+ "normalized": false,
393
+ "rstrip": false,
394
+ "single_word": false,
395
+ "special": true
396
+ },
397
+ "110519": {
398
+ "content": "<pr_diff_hunk_comment_line>",
399
+ "lstrip": false,
400
+ "normalized": false,
401
+ "rstrip": false,
402
+ "single_word": false,
403
+ "special": true
404
+ },
405
+ "110520": {
406
+ "content": "<NAME>",
407
+ "lstrip": false,
408
+ "normalized": false,
409
+ "rstrip": false,
410
+ "single_word": false,
411
+ "special": true
412
+ },
413
+ "110521": {
414
+ "content": "<EMAIL>",
415
+ "lstrip": false,
416
+ "normalized": false,
417
+ "rstrip": false,
418
+ "single_word": false,
419
+ "special": true
420
+ },
421
+ "110522": {
422
+ "content": "<KEY>",
423
+ "lstrip": false,
424
+ "normalized": false,
425
+ "rstrip": false,
426
+ "single_word": false,
427
+ "special": true
428
+ },
429
+ "110523": {
430
+ "content": "<PASSWORD>",
431
+ "lstrip": false,
432
+ "normalized": false,
433
+ "rstrip": false,
434
+ "single_word": false,
435
+ "special": true
436
+ }
437
+ },
438
+ "additional_special_tokens": [
439
+ "<|endoftext|>",
440
+ "<|fim_prefix|>",
441
+ "<|fim_middle|>",
442
+ "<|fim_suffix|>",
443
+ "<|endofprompt|>",
444
+ "<|_unuse_missing_100256|>",
445
+ "<|_unuse_missing_100261|>",
446
+ "<|_unuse_missing_100262|>",
447
+ "<|_unuse_missing_100263|>",
448
+ "<|_unuse_missing_100264|>",
449
+ "<|_unuse_missing_100265|>",
450
+ "<|_unuse_missing_100266|>",
451
+ "<|_unuse_missing_100267|>",
452
+ "<|_unuse_missing_100268|>",
453
+ "<|_unuse_missing_100269|>",
454
+ "<|_unuse_missing_100270|>",
455
+ "<|dummy3|>",
456
+ "<|im_start|>",
457
+ "<|im_end|>",
458
+ "<|stop|>",
459
+ "<|endofturn|>",
460
+ "<repo_name>",
461
+ "<file_sep>",
462
+ "<issue_start>",
463
+ "<issue_comment>",
464
+ "<issue_closed>",
465
+ "<jupyter_start>",
466
+ "<jupyter_text>",
467
+ "<jupyter_code>",
468
+ "<jupyter_output>",
469
+ "<jupyter_script>",
470
+ "<empty_output>",
471
+ "<code_to_intermediate>",
472
+ "<intermediate_to_code>",
473
+ "<pr>",
474
+ "<pr_status>",
475
+ "<pr_is_merged>",
476
+ "<pr_base>",
477
+ "<pr_file>",
478
+ "<pr_base_code>",
479
+ "<pr_diff>",
480
+ "<pr_diff_hunk>",
481
+ "<pr_comment>",
482
+ "<pr_event_id>",
483
+ "<pr_review>",
484
+ "<pr_review_state>",
485
+ "<pr_review_comment>",
486
+ "<pr_in_reply_to_review_id>",
487
+ "<pr_in_reply_to_comment_id>",
488
+ "<pr_diff_hunk_comment_line>",
489
+ "<NAME>",
490
+ "<EMAIL>",
491
+ "<KEY>",
492
+ "<PASSWORD>"
493
+ ],
494
+ "auto_map": {
495
+ "AutoProcessor": "processing_hyperclovax.HCXProcessor"
496
+ },
497
+ "bos_token": "<|endoftext|>",
498
+ "clean_up_tokenization_spaces": true,
499
+ "eos_token": "<|endofturn|>",
500
+ "errors": "replace",
501
+ "extra_special_tokens": {},
502
+ "model_max_length": 1000000000000000019884624838656,
503
+ "pad_token": "<|endoftext|>",
504
+ "processor_class": "HCXProcessor",
505
+ "tokenizer_class": "GPT2Tokenizer",
506
+ "unk_token": "<|endoftext|>"
507
+ }
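A hedged cross-check (repo path illustrative): the placeholder tokens hard-coded in HCXProcessor resolve to the ids listed in `added_tokens_decoder` above.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("path/to/this/repo")
print(tok.convert_tokens_to_ids("<|dummy3|>"))                 # 100271 (image placeholder)
print(tok.convert_tokens_to_ids("<|_unuse_missing_100270|>"))  # 100270 (video placeholder)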
vocab.json ADDED
The diff for this file is too large to render. See raw diff