pan-li committed on
Commit 45ae458 · verified · 1 Parent(s): c75537f

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+inverse_folding.png filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,49 @@
GENBIO AI COMMUNITY LICENSE AGREEMENT

This GenBio AI Community License Agreement (the “License”) constitutes an agreement between you or the legal entity you represent (“you” or “your”) and GENBIO.AI, INC. (“GenBio”), governing your use of the GenBio Materials. If you are using the GenBio Materials on behalf of a legal entity, you represent and warrant to GenBio that you have full legal authority to act on behalf of that legal entity as applicable under the License. If you do not have the authority to accept this License or if you disagree with any or all of the License, you shall not use the GenBio Materials in any manner. By using or distributing any portion or element of the GenBio Materials, you imply your agreement to be bound by the License.

“GenBio Materials” means any datasets, code, model weights or any other materials provided by GenBio at the following GitHub Page https://github.com/genbio-ai or Hugging Face Page https://huggingface.co/genbio-ai, including any updates or modifications made from time to time, whether in Source or Object form, and is made available to you under this License.

1. License Grant.
1.1 License Scope. Subject to the terms of this License, GenBio grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable and royalty-free limited license under GenBio’s intellectual property or other rights owned by GenBio embodied in the GenBio Materials to use, reproduce, distribute, and create Derivative Works of, and make modifications to, the GenBio Materials for any Non-Commercial Purposes.
1.2 Use Restrictions. Restricted activities in relation to the License or use of GenBio Materials include:
1.2.1 You shall use the GenBio Materials, Contributions, Derivative Works, Outputs and Output Derivatives (as defined below) solely for Non-Commercial Purposes;
1.2.2 You shall not, directly or indirectly: (a) use or provide access to any Outputs or Output Derivatives to train, optimize, improve, or otherwise enhance the functionality or performance of any machine learning models or related technologies that are similar to the GenBio Materials; (b) engage in any form of model distillation or other methods that would achieve the purposes described in subsection (a) above. Notwithstanding the foregoing, you may use Outputs and Output Derivatives to train, optimize, improve, or enhance the functionality or performance of: (i) the GenBio Materials itself; and (ii) downstream Derivative Works of the GenBio Materials;
1.2.3 Your use of the GenBio Materials shall be subject to any additional terms and conditions that: (a) GenBio provides to you separately; or (b) GenBio otherwise makes available to you.

2. Sharing and Distribution.
2.1 Subject to Section 1, if you distribute or make available the GenBio Materials or a Derivative Work to a third party for your Non-Commercial Purposes, in Source or Object form, you shall:
2.1.1 provide a copy of this License to that third party;
2.1.2 retain the following attribution notice within a “Notice” text file distributed as a part of such copies: “This is licensed under the GenBio AI Community License Agreement, Copyright © GENBIO.AI, INC. All Rights Reserved”; and
2.1.3 prominently display “Powered by GenBio AI” on a related website, user interface, blogpost, about page, or product documentation.
2.2 If you create a Derivative Work, you may add your own attribution notice(s) to the “Notice” text file included with that Derivative Work, provided that you clearly indicate which attributions apply to the GenBio Materials and state in the “Notice” text file that you changed the GenBio Materials and how it was modified.

3. Submission of Contribution.
Unless you explicitly state otherwise, any Contribution intentionally submitted for inclusion in the GenBio Materials by you to GenBio shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with GenBio regarding such Contributions.

4. Export Control.
You shall comply with the applicable U.S. Foreign Corrupt Practices Act and all applicable export laws, restrictions and regulations of the U.S. Department of Commerce, and any other applicable U.S. and foreign authority.

5. Disclaimer of Warranty.
GENBIO MATERIALS PROVIDED BY GENBIO OR ANY OUTPUT YOU RECEIVED ARE PROVIDED “AS IS.” EXCEPT TO THE EXTENT PROHIBITED BY LAW, GENBIO MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, WHETHER EXPRESS, IMPLIED OR OTHERWISE, REGARDING THE ACCURACY, COMPLETENESS OR PERFORMANCE OF THE SERVICES AND YOUR OUTPUT, OR WITH RESPECT TO SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT.

6. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to you for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the GenBio Materials (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

7. General Terms.
7.1 Relationship of Parties. You and GenBio are independent contractors, and nothing herein shall be deemed to constitute either party as the agent or representative of the other or both parties as joint venturers or partners for any purpose.
7.2 Assignment. This License and the rights and obligations herein may not be assigned or transferred, in whole or in part, by you without the prior written consent of GenBio. Any assignment in violation of this provision is void. GenBio may freely assign or transfer this License, in whole or in part. This License shall be binding upon, and inure to the benefit of, the successors and permitted assigns of the parties.
7.3 Governing Law. This License shall be governed, construed and interpreted in accordance with the laws of the State of California, without giving effect to principles of conflicts of law. Each of the parties to this License consents to the exclusive jurisdiction and venue of the state and federal courts of California.
7.4 Severability. If any provision of this License is held to be invalid, illegal or unenforceable in any respect, that provision shall be limited or eliminated to the minimum extent necessary so that this License otherwise remains in full force and effect and enforceable.

8. Definitions.
8.1 “Commercial Entity” means any entity engaged in any activity intended for or directed toward commercial advantage or monetary compensation, including, without limitation, the development of any product or service intended to be sold or made available for a fee. For the purpose of this License, references to a Commercial Entity expressly exclude any universities, non-profit organizations, not-for-profit entities, research institutes and educational and government bodies.
8.2 “Contribution” means any work of authorship, including the original version of the GenBio Materials and any modifications or additions to that GenBio Materials or Derivative Works thereof, that is intentionally submitted to GenBio for inclusion in the GenBio Materials by the copyright owner or by an individual or legal entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to GenBio or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, GenBio for the purpose of discussing and improving the GenBio Materials, but excluding Outputs and all communications that are conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution”.
8.3 “Contributor” means GenBio and any individual or legal entity on behalf of whom a Contribution has been received by GenBio and subsequently incorporated within the GenBio Materials.
8.4 “Derivative Work” means any work, whether in Source or Object form, that is based on (or derived from) the GenBio Materials and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the GenBio Materials and Derivative Works thereof.
8.5 “Non-Commercial Purposes” means uses not intended for or directed toward commercial advantage or monetary compensation, or the facilitation of development of any product or service to be sold or made available for a fee. For the avoidance of doubt, the provision of Outputs as a service is not a Non-Commercial Purpose.
8.6 “Object” means any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
8.7 “Output” means any output, including any protein sequence, structure prediction, functional annotation, molecule, descriptions of a molecule, model, sequence, text, and/or image that is elicited directly or indirectly by, or otherwise made available to, you in connection with your use of the GenBio Materials, including, but not limited to, the use of AI-Powered Technology. For the avoidance of doubt, it includes any intermediate results, such as activations across model layers, intermediate outputs from model layers (e.g., attention maps), as well as gradients and embeddings produced by the GenBio Materials.
8.8 “Output Derivatives” means any enhancements, modifications and derivative works of Outputs (including, but not limited to, any derivative sequences or molecules).
8.9 “Source” means the preferred form for making modifications, including but not limited to GenBio Materials source code, documentation source, and configuration files.
README.md CHANGED
@@ -1,3 +1,174 @@
- ---
- license: unknown
- ---
---
license: other
---

# AIDO.RAGProtein-16B

AIDO.RAGProtein-16B is a multimodal protein language model that integrates MSA and structural data, built on the [AIDO.Protein-16B](https://huggingface.co/genbio-ai/AIDO.Protein-16B) model. Its training is divided into multiple stages: 100 billion tokens are trained on UniRef50/UniClust30 MSA data, followed by 80 billion tokens on AlphaFold Database MSA and structural data.

## Model Architecture Details

AIDO.RAGProtein-16B is a transformer encoder-only architecture in which the dense MLP layer in each transformer block is replaced by a sparse MoE layer. It uses single-amino-acid tokenization and is optimized with a masked language modeling (MLM) training objective. For each token, two experts are selectively activated by the top-2 routing mechanism.
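The top-2 routing step can be sketched as follows. This is a minimal illustration, not the actual FM4BioModel implementation; the function and tensor names are assumptions, and only the sizes (8 experts, 2 active per token, hidden size 2304) come from this model card.

```python
import torch
import torch.nn.functional as F

def top2_route(hidden, router_weight):
    """Select 2 of the experts per token and return normalized gate weights.

    hidden:        (num_tokens, hidden_size) token representations
    router_weight: (num_experts, hidden_size) router projection
    """
    logits = hidden @ router_weight.t()             # (num_tokens, num_experts)
    top2_logits, top2_idx = logits.topk(2, dim=-1)  # keep the 2 highest-scoring experts
    gates = F.softmax(top2_logits, dim=-1)          # renormalize over the chosen 2
    return top2_idx, gates

hidden = torch.randn(5, 2304)   # 5 tokens, hidden size 2304
router = torch.randn(8, 2304)   # 8 experts, as in the config
idx, gates = top2_route(hidden, router)
print(idx.shape, gates.shape)   # each token gets 2 expert ids and 2 gate weights
```

Each token's MoE output would then be the gate-weighted sum of its two selected experts' outputs.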

<center><img src="proteinmoe_architecture.png" alt="An Overview of AIDO.Protein" style="width:70%; height:auto;" /></center>

More architecture details are shown below:

| Model Arch Component | Value |
| ------------------------- | :---: |
| Num Attention Heads | 36 |
| Num Hidden Layers | 36 |
| Hidden Size | 2304 |
| FFN Hidden Size | 7680 |
| Num Experts per MoE Layer | 8 |
| Num Experts per Token | 2 |
| Vocab Size | 44 |
| Context Length | 2048 |

## Pre-training of AIDO.RAGProtein-16B

Here we briefly introduce the pre-training of AIDO.RAGProtein-16B. It is divided into three stages: (1) 1D -> 2D finetuning; (2) UniRef50/Uniclust30 MSA finetuning; (3) AlphaFold Database MSA & structure tokens finetuning.

### Data

**UniRef50/Uniclust30 MSA dataset**: We used sequences from UniRef50 as queries to search for homologous sequences in UniClust30, and then constructed multiple sequence alignments (MSAs). UniRef50 comprises 53.6 million sequences in total. Using HHblits, we searched all sequences and identified more than 25 homologous sequences for 23.7 million of them. This dataset was used directly as a training set, referred to as `HHblits_MSA`. The remaining 29.9 million sequences were passed to MSA Retriever, yielding 7.7 million sequences with more than 25 homologous sequences; this dataset was designated `Retriever_MSA`. During training, RAGPLM randomly sampled from the two datasets with probabilities of 0.75 and 0.25.

**AlphaFold Database MSA & Structure dataset**: We downloaded all structural data from the AlphaFold Database and kept only structures in which more than 40% of residues have pLDDT > 70. We then used `mmseqs` to cluster the remaining sequences at `seq id=0.5` and retained one representative sequence per cluster. Finally, we obtained 46.9 million sequence/structure pairs. For each structure, we used [genbio-ai/AIDO.StructureTokenizer](https://huggingface.co/genbio-ai/AIDO.StructureTokenizer) to obtain the corresponding structure tokens and structure embedding, and used [MSA Retriever](https://www.biorxiv.org/content/10.1101/2024.12.02.626519v1) to obtain the MSA corresponding to each sequence.

### Training Details

Model training is divided into three stages:

#### (1) 1D -> 2D finetuning

Same training data as [AIDO.Protein-16B](https://huggingface.co/genbio-ai/AIDO.Protein-16B), but using [2D rotary position embedding](https://arxiv.org/abs/2406.05347) to encode the tokens.

#### (2) UniRef50/Uniclust30 MSA finetuning

We used the UniRef50/Uniclust30 MSA dataset to finetune the model from stage (1). Refer to [AIDO.RAGPLM](https://www.biorxiv.org/content/10.1101/2024.12.02.626519v1) for more information.

#### (3) AFDB MSA & Structure tokens finetuning

We fine-tuned the pretrained masked language model on MSA data by concatenating the query sequence with its homologous sequences. The input structure embedding (hidden dimension 384) is linearly mapped to 2304 and then added to the embedding of the corresponding query sequence tokens.
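The structure-embedding fusion described above can be sketched as follows. The 384 → 2304 linear map and the additive fusion come from the description; the module and variable names are assumptions, not the actual implementation.

```python
import torch
import torch.nn as nn

# Linear projection from structure-embedding dim (384) to model hidden size (2304)
str_proj = nn.Linear(384, 2304)

seq_emb = torch.randn(1, 100, 2304)  # query token embeddings (batch, length, hidden)
str_emb = torch.randn(1, 100, 384)   # per-residue structure embeddings

# Add the projected structure embedding to the query sequence token embeddings
fused = seq_emb + str_proj(str_emb)
print(fused.shape)                   # (1, 100, 2304)
```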
53
+
54
+ **Mask of sequences**: We introduced several modifications to the standard BERT masking strategy: (1) We randomly sampled `0.05×L` span positions from a query sequence of length `L`, with span lengths following a geometric distribution (`p=0.2`), and capped the maximum length at 10. Our experiments revealed that this settings lead to an average of 15% of the query tokens were masked. (2) To prevent information leakage, when a residue was selected, all residues at the same index across all sequences (the column of the MSA matrix) were also masked. (3) When a column of MSA was selected for masking, the entire column was replaced with the `<MASK>` token in 80% of cases, with random amino acids in 10% of cases, and remained unchanged in the remaining 10% of cases.
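The span-sampling step (1) can be sketched as below. This is an illustrative reconstruction under the stated parameters (`0.05×L` span starts, geometric lengths with `p=0.2`, cap 10), not the training code; function and variable names are assumptions.

```python
import numpy as np

def sample_masked_columns(L, rng, span_rate=0.05, p=0.2, max_span=10):
    """Pick MSA columns to mask: ~0.05*L span starts, geometric span lengths capped at 10."""
    n_spans = max(1, int(span_rate * L))
    starts = rng.choice(L, size=n_spans, replace=False)       # span start positions
    lengths = np.minimum(rng.geometric(p, size=n_spans), max_span)
    masked = set()
    for s, length in zip(starts, lengths):
        masked.update(range(s, min(s + length, L)))           # spans may overlap
    return sorted(masked)

rng = np.random.default_rng(0)
cols = sample_masked_columns(512, rng)
print(len(cols) / 512)   # fraction of columns masked (roughly 0.15 on average)
```

Each selected column would then be masked across all rows of the MSA, per steps (2) and (3).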
55
+
56
+ **Mask of structure**: In 20% of the cases, we randomly replaced the structure embedding with 0; in 80% of the cases, we randomly sampled a certain number of amino acids using the BetaLinear30 distribution and masked their structure embedding. The BetaLinear30 distribution is defined as a combination of 20% of the [0, 1] uniform distribution and 80% of the Beta(3, 9) Beta distribution.
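A sketch of this masking scheme, assuming "masking" a residue's structure embedding means zeroing it (an assumption; the model card does not specify the mask value). Function names are illustrative.

```python
import numpy as np

def betalinear30_fraction(rng):
    """Sample a masking fraction from BetaLinear30: 0.2 * Uniform(0,1) + 0.8 * Beta(3, 9)."""
    if rng.random() < 0.2:
        return rng.random()        # uniform component
    return rng.beta(3.0, 9.0)      # Beta(3, 9) component

def mask_structure_embeddings(str_emb, rng):
    """With prob 0.2 drop the whole structure embedding; else mask a BetaLinear30 fraction."""
    L = str_emb.shape[0]
    out = str_emb.copy()
    if rng.random() < 0.2:
        out[:] = 0.0               # replace the entire structure embedding with 0
    else:
        n = int(betalinear30_fraction(rng) * L)
        idx = rng.choice(L, size=n, replace=False)
        out[idx] = 0.0             # assumption: masked residues are zeroed
    return out

rng = np.random.default_rng(0)
masked = mask_structure_embeddings(np.ones((100, 384)), rng)
print((masked.sum(axis=1) == 0).mean())   # fraction of residues with masked structure
```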

**Positional embedding**: To help the model distinguish which tokens come from the same chain and which tokens share a residue index, we use [2D rotary position embedding](https://arxiv.org/abs/2406.05347) to encode the tokens.

**Loss**: The loss function consists of a sequence loss and a structure loss (weighted 1.0 and 0.01, respectively). The sequence loss is the cross-entropy for recovering the masked sequence tokens, and the structure loss is the cross-entropy for predicting each masked structure token.
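The weighted loss can be sketched as below. The 1.0/0.01 weights and the two cross-entropy terms come from the description; the structure vocabulary size (512) and the use of `ignore_index=-100` for unmasked positions are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pretraining_loss(seq_logits, seq_targets, str_logits, str_targets,
                     w_seq=1.0, w_str=0.01):
    """Weighted sum of sequence and structure cross-entropy over masked positions.

    Targets hold -100 at unmasked positions so F.cross_entropy ignores them.
    """
    seq_loss = F.cross_entropy(seq_logits, seq_targets, ignore_index=-100)
    str_loss = F.cross_entropy(str_logits, str_targets, ignore_index=-100)
    return w_seq * seq_loss + w_str * str_loss

# Toy example: 6 positions, 44 sequence classes, 512 structure classes (assumed)
seq_logits = torch.randn(6, 44)
str_logits = torch.randn(6, 512)
seq_targets = torch.tensor([3, -100, 7, -100, 1, 2])
str_targets = torch.tensor([-100, 10, -100, 99, 5, -100])
loss = pretraining_loss(seq_logits, seq_targets, str_logits, str_targets)
print(loss.item())   # scalar training loss
```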

| Hyper-params | (1) 1D -> 2D finetuning | (2) UniRef50/Uniclust30 MSA finetuning | (3) AFDB MSA & Structure tokens finetuning |
| --------------------------- | :---------------------: | :------------------------------------: | :----------------------------------------: |
| Data | ColabFoldDB, UniRef | HHblits_MSA, Retriever_MSA | AFDB MSA & Structure tokens |
| Global Batch Size | 512 | 256 | 256 |
| Sequence Length | 2048 | 12800 | 12800 |
| Per Device Micro Batch Size | 1 | 1 | 1 |
| Precision | Mixed FP32-FP16 | Mixed FP32-FP16 | Mixed FP32-FP16 |
| 1st Stage LR | [5e-6, 5e-5] | [1e-6, 1e-5] | 1e-5 |
| 1st Stage Num Tokens | 10 billion | 100 billion | 80 billion |

### Tokenization

We encode protein sequences at single-amino-acid resolution with a vocabulary of 44 tokens, of which 24 represent amino acid types and 20 are special tokens. Sequences are also suffixed with a `[SEP]` token as a hook for downstream tasks.

## How to Use

### Build any downstream models from this backbone with ModelGenerator

For more information, visit: [ModelGenerator](https://github.com/genbio-ai/modelgenerator)

```bash
mgen fit --model SequenceClassification --model.backbone aido_ragprotein_16b --data SequenceClassificationDataModule --data.path <hf_or_local_path_to_your_dataset>
mgen test --model SequenceClassification --model.backbone aido_ragprotein_16b --data SequenceClassificationDataModule --data.path <hf_or_local_path_to_your_dataset>
```

### Or use directly in Python

#### Embedding

```python
import torch
from modelgenerator.tasks import Embed

model = Embed.from_config({"model.backbone": "aido_protein_16b_ragplm"}).eval()
model.backbone.max_length = 12800
data = torch.load("ModelGenerator/experiments/AIDO.RAGPLM/examples.pt", "cpu")[0]
transformed_batch = model.transform(data)
with torch.no_grad():
    embedding = model(transformed_batch)

print(embedding.shape)
```

#### Sequence Level Classification

```python
import torch
from modelgenerator.tasks import SequenceClassification

model = SequenceClassification.from_config({"model.backbone": "aido_protein_16b_ragplm", "model.n_classes": 2}).eval()
model.backbone.max_length = 12800
data = torch.load("ModelGenerator/experiments/AIDO.RAGPLM/examples.pt", "cpu")[0]
transformed_batch = model.transform(data)
with torch.no_grad():
    logits = model(transformed_batch)

print(logits)
print(torch.argmax(logits, dim=-1))
```

#### Token Level Classification

```python
import torch
from modelgenerator.tasks import TokenClassification

model = TokenClassification.from_config({"model.backbone": "aido_protein_16b_ragplm", "model.n_classes": 3}).eval()
model.backbone.max_length = 12800
data = torch.load("ModelGenerator/experiments/AIDO.RAGPLM/examples.pt", "cpu")[0]
transformed_batch = model.transform(data)
with torch.no_grad():
    logits = model(transformed_batch)

print(logits)
print(torch.argmax(logits, dim=-1))
```

#### Regression

```python
import torch
from modelgenerator.tasks import SequenceRegression

model = SequenceRegression.from_config({"model.backbone": "aido_protein_16b_ragplm"}).eval()
model.backbone.max_length = 12800
data = torch.load("ModelGenerator/experiments/AIDO.RAGPLM/examples.pt", "cpu")[0]
transformed_batch = model.transform(data)
with torch.no_grad():
    logits = model(transformed_batch)

print(logits.shape)
```

# Citation

Please cite AIDO.RAGProtein-16B using the following BibTeX code:

```
@inproceedings{sun_mixture_2024,
  title = {Mixture of Experts Enable Efficient and Effective Protein Understanding and Design},
  url = {https://www.biorxiv.org/content/10.1101/2024.11.29.625425v1},
  doi = {10.1101/2024.11.29.625425},
  publisher = {bioRxiv},
  author = {Sun, Ning and Zou, Shuxian and Tao, Tianhua and Mahbub, Sazan and Li, Dian and Zhuang, Yonghao and Wang, Hongyi and Cheng, Xingyi and Song, Le and Xing, Eric P.},
  year = {2024},
  booktitle = {NeurIPS 2024 Workshop on AI for New Drug Modalities},
}

@article{Li2024.12.02.626519,
  author = {Li, Pan and Cheng, Xingyi and Song, Le and Xing, Eric},
  title = {Retrieval Augmented Protein Language Models for Protein Structure Prediction},
  url = {https://www.biorxiv.org/content/10.1101/2024.12.02.626519v1},
  year = {2024},
  doi = {10.1101/2024.12.02.626519},
  publisher = {bioRxiv},
  booktitle = {NeurIPS 2024 Workshop on Machine Learning in Structural Biology},
}
```
config.json ADDED
@@ -0,0 +1,36 @@
{
  "add_linear_bias": true,
  "architectures": [
    "FM4BioModel"
  ],
  "attention_probs_dropout_prob": 0,
  "experts_per_token": 2,
  "gradient_checkpointing": false,
  "hidden_act": "swiglu",
  "hidden_dropout_prob": 0,
  "hidden_size": 2304,
  "initializer_range": 0.02,
  "intermediate_size": 7680,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 2048,
  "model_type": "fm4bio",
  "moe": true,
  "normalization_type": "RMSNorm",
  "num_attention_heads": 36,
  "num_experts": 8,
  "num_hidden_layers": 36,
  "output_vocab_size": null,
  "pad_token_id": 0,
  "position_embedding_type": "rope_2d",
  "rotary_percent": 1.0,
  "seq_len_interpolation_factor": null,
  "str_embedding_in": 384,
  "tie_word_embeddings": false,
  "tokenizer_insert_str_tokens": false,
  "torch_dtype": "float32",
  "transformers_version": "4.48.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "use_lm_head": false,
  "vocab_size": 640
}
pytorch_model-00001-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:354597127927f71d14a63ef826943e8254ccca9ee7fb2b9323fe07c268442b08
size 4937630903
pytorch_model-00002-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2301e538c3fa7299e7ff5dd3c74ac119d1584b247a0dc1d0f03d695a0d3226d0
size 4928193017
pytorch_model-00003-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ef72208cd1bf86314c8d700fbe59304e623e23bfe2f51088e75a91d9fce0309
size 4928192945
pytorch_model-00004-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f41255d0584e9c1b50cd50bde3d19b9c2814b88bd47512b4254d4a5225d6cb3
size 4928192945
pytorch_model-00005-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a73af58191abeca4f0b21a309f874cdc742ad7635c52c5fe2f2ba875f3a9500
size 4984745566
pytorch_model-00006-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:770fb8345c2046efb5de3370a7667ac12f72ec63c54f609fe472136edbfad860
size 4998981943
pytorch_model-00007-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1c4c7cab586433aef098d87b9cb02e3b0acbccb5c38dbbfafddaf5098de3cc9
size 4928193097
pytorch_model-00008-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5a3485e9185b42f1e60ffbfb660826725368b69d797b00bfc37e86a57052981c
size 4928193073
pytorch_model-00009-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9bc4a1a420cc991d82464fd072aa8bda1eadeae123b4efe10b5e4f7bddd229b
size 4984745566
pytorch_model-00010-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:202779ecb0e2745b3ab4668f433581ce32a4cebbbae1dcc88fc24e97e77f8d51
size 4998981979
pytorch_model-00011-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:112fa217beac23ca7a242a19392d1ed528bae1e8dfd2194f4efd0dbdcac946b9
size 4928193145
pytorch_model-00012-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5aa9af9a3c14c266a4ea12154ca50527de64f90646b0df20a069df4e64cb4fb5
size 4928193073
pytorch_model-00013-of-00013.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6d3899a4992d70ae352338aa11310bc6338d9b8efa30f1e75529c66d8807b4a7
size 4843135119
pytorch_model.bin.index.json ADDED
The diff for this file is too large to render. See raw diff