ZhiyuanChen committed on
Commit a10d76a · verified · 1 Parent(s): 0c0276a

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,313 @@
---
language: rna
tags:
  - Biology
  - RNA
license: agpl-3.0
datasets:
  - multimolecule/rnacentral
library_name: multimolecule
pipeline_tag: fill-mask
mask_token: "<mask>"
widget:
  - example_title: "HIV-1"
    text: "GGUC<mask>CUCUGGUUAGACCAGAUCUGAGCCU"
    output:
      - label: "U"
        score: 0.3169708847999573
      - label: "W"
        score: 0.12581486999988556
      - label: "K"
        score: 0.09805052727460861
      - label: "D"
        score: 0.07830371707677841
      - label: "Y"
        score: 0.05044170096516609
  - example_title: "microRNA-21"
    text: "UAGC<mask>UAUCAGACUGAUGUUG"
    output:
      - label: "U"
        score: 0.3052324652671814
      - label: "W"
        score: 0.1103190928697586
      - label: "K"
        score: 0.0816153734922409
      - label: "Y"
        score: 0.07827945053577423
      - label: "D"
        score: 0.06427925080060959
---

# AIDO.RNA

Pre-trained model on non-coding RNA (ncRNA) using a masked language modeling (MLM) objective.

## Disclaimer

This is an UNOFFICIAL implementation of [A Large-Scale Foundation Model for RNA Function and Structure Prediction](https://doi.org/10.1101/2024.11.28.625345) by Shuxian Zou, Tianhua Tao, Sazan Mahbub, et al.

The OFFICIAL repository of AIDO.RNA is at [genbio-ai/AIDO](https://github.com/genbio-ai/AIDO).

> [!WARNING]
> The MultiMolecule team is aware of a potential risk in reproducing the results of AIDO.RNA.
>
> The original implementation of AIDO.RNA uses a special tokenizer that identifies `U` and `T` as different tokens.
>
> This behaviour is not supported by MultiMolecule.

> [!TIP]
> The MultiMolecule team has confirmed that the provided model and checkpoints produce the same intermediate representations as the original implementation.

**The team releasing AIDO.RNA did not write this model card, so it has been written by the MultiMolecule team.**

## Model Details

AIDO.RNA is a [bert](https://huggingface.co/google-bert/bert-base-uncased)-style model pre-trained on a large corpus of non-coding RNA sequences in a self-supervised fashion. This means that the model was trained on the raw nucleotides of RNA sequences only, with an automatic process to generate inputs and labels from those sequences. Please refer to the [Training Details](#training-details) section for more information on the training process.

### Variants

- **[multimolecule/aido.rna-650m](https://huggingface.co/multimolecule/aido.rna-650m)**: The AIDO.RNA model with 650 million parameters.
- **[multimolecule/aido.rna-1.6b](https://huggingface.co/multimolecule/aido.rna-1.6b)**: The AIDO.RNA model with 1.6 billion parameters.

### Model Specification

<table>
<thead>
  <tr>
    <th>Variants</th>
    <th>Num Layers</th>
    <th>Hidden Size</th>
    <th>Num Heads</th>
    <th>Intermediate Size</th>
    <th>Num Parameters (M)</th>
    <th>FLOPs (G)</th>
    <th>MACs (G)</th>
    <th>Max Num Tokens</th>
  </tr>
</thead>
<tbody>
  <tr>
    <td>AIDO.RNA-650M</td>
    <td>33</td>
    <td>1280</td>
    <td>20</td>
    <td>3392</td>
    <td>648.38</td>
    <td>168.25</td>
    <td>80.09</td>
    <td rowspan="2">1022</td>
  </tr>
  <tr>
    <td>AIDO.RNA-1.6B</td>
    <td>32</td>
    <td>2048</td>
    <td>32</td>
    <td>5440</td>
    <td>1650.29</td>
    <td>415.67</td>
    <td>207.77</td>
  </tr>
</tbody>
</table>

### Links

- **Code**: [multimolecule.aido_rna](https://github.com/DLS5-Omics/multimolecule/tree/master/multimolecule/models/aido_rna)
- **Weights**: [multimolecule/aido.rna](https://huggingface.co/multimolecule/aido.rna)
- **Data**: [multimolecule/rnacentral](https://huggingface.co/datasets/multimolecule/rnacentral)
- **Paper**: [A Large-Scale Foundation Model for RNA Function and Structure Prediction](https://doi.org/10.1101/2024.11.28.625345)
- **Developed by**: Shuxian Zou, Tianhua Tao, Sazan Mahbub, Caleb N. Ellington, Robin Algayres, Dian Li, Yonghao Zhuang, Hongyi Wang, Le Song, Eric P. Xing
- **Model type**: [BERT](https://huggingface.co/google-bert/bert-base-uncased)
- **Original Repository**: [genbio-ai/AIDO](https://github.com/genbio-ai/AIDO)

## Usage

The model file depends on the [`multimolecule`](https://multimolecule.danling.org) library. You can install it using pip:

```bash
pip install multimolecule
```

### Direct Use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> import multimolecule  # you must import multimolecule to register models
>>> from transformers import pipeline

>>> unmasker = pipeline("fill-mask", model="multimolecule/aido.rna-650m")
>>> unmasker("gguc<mask>cucugguuagaccagaucugagccu")
[{'score': 0.3169708847999573,
  'token': 9,
  'token_str': 'U',
  'sequence': 'G G U C U C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.12581486999988556,
  'token': 14,
  'token_str': 'W',
  'sequence': 'G G U C W C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.09805052727460861,
  'token': 15,
  'token_str': 'K',
  'sequence': 'G G U C K C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.07830371707677841,
  'token': 18,
  'token_str': 'D',
  'sequence': 'G G U C D C U C U G G U U A G A C C A G A U C U G A G C C U'},
 {'score': 0.05044170096516609,
  'token': 12,
  'token_str': 'Y',
  'sequence': 'G G U C Y C U C U G G U U A G A C C A G A U C U G A G C C U'}]
```

### Downstream Use

#### Extract Features

Here is how to use this model to get the features of a given sequence in PyTorch:

```python
from multimolecule import RnaTokenizer, AidoRnaModel


tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m")
model = AidoRnaModel.from_pretrained("multimolecule/aido.rna-650m")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")

output = model(**input)
```
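
The `output` contains the per-token representations; for the 650M variant, `output.last_hidden_state` should have a hidden dimension of 1280 (see [Model Specification](#model-specification)). Assuming the bundled tokenizer defaults, `<cls>` and `<eos>` tokens are added around the input, so the 21-nucleotide example above is expected to yield a tensor of shape `(1, 23, 1280)`.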

#### Sequence Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for sequence classification or regression.

Here is how to use this model as a backbone to fine-tune for a sequence-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForSequencePrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m")
model = AidoRnaForSequencePrediction.from_pretrained("multimolecule/aido.rna-650m")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.tensor([1])

output = model(**input, labels=label)
```

#### Token Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for token classification or regression.

Here is how to use this model as a backbone to fine-tune for a nucleotide-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForTokenPrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m")
model = AidoRnaForTokenPrediction.from_pretrained("multimolecule/aido.rna-650m")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text),))

output = model(**input, labels=label)
```

#### Contact Classification / Regression

> [!NOTE]
> This model is not fine-tuned for any specific task. You will need to fine-tune the model on a downstream task to use it for contact classification or regression.

Here is how to use this model as a backbone to fine-tune for a contact-level task in PyTorch:

```python
import torch
from multimolecule import RnaTokenizer, AidoRnaForContactPrediction


tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m")
model = AidoRnaForContactPrediction.from_pretrained("multimolecule/aido.rna-650m")

text = "UAGCUUAUCAGACUGAUGUUG"
input = tokenizer(text, return_tensors="pt")
label = torch.randint(2, (len(text), len(text)))

output = model(**input, labels=label)
```

## Training Details

AIDO.RNA used masked language modeling (MLM) as the pre-training objective: taking a sequence, the model randomly masks 15% of the tokens in the input, runs the entire masked sequence through the model, and has to predict the masked tokens. This is comparable to the Cloze task in language modeling.

### Training Data

The AIDO.RNA model was pre-trained on [RNAcentral](https://multimolecule.danling.org/datasets/rnacentral) and [MARS](https://ngdc.cncb.ac.cn/omix/release/OMIX003037).
RNAcentral is a free, public resource that offers integrated access to a comprehensive and up-to-date set of non-coding RNA sequences provided by a collaborating group of [Expert Databases](https://rnacentral.org/expert-databases) representing a broad range of organisms and RNA types.

AIDO.RNA applied SeqKit to remove duplicated sequences from RNAcentral, resulting in 42 million unique sequences.

Note that AIDO.RNA identifies `U` and `T` as different tokens, which is not supported by MultiMolecule. During model conversion, the embedding of `T` was discarded. This means that the model will not be able to distinguish between `U` and `T` in the input sequences.
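
In the converted checkpoint, the bundled `RnaTokenizer` instead maps `T` to `U` at tokenization time (the `replace_T_with_U` option in the `tokenizer_config.json` shipped with this repository). A minimal sketch of what this implies:

```python
from multimolecule import RnaTokenizer

tokenizer = RnaTokenizer.from_pretrained("multimolecule/aido.rna-650m")

# With `replace_T_with_U` enabled, the DNA-style and RNA-style spellings
# of the same sequence are expected to yield identical token ids.
dna_style = tokenizer("TAGCTTATCAGACTGATGTTG")["input_ids"]
rna_style = tokenizer("UAGCUUAUCAGACUGAUGUUG")["input_ids"]
assert dna_style == rna_style
```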

### Training Procedure

#### Preprocessing

AIDO.RNA used masked language modeling (MLM) as the pre-training objective. The masking procedure is similar to the one used in BERT, as sketched in the example after this list:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
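
The following is a minimal sketch of this 80/10/10 scheme, using the token ids from the bundled `config.json` (`<mask>` is id 4, ids 0-5 are special tokens, and the vocabulary has 26 entries). It is illustrative only, not the original pre-training code:

```python
import random

MASK_ID = 4                  # <mask>, per config.json
SPECIAL_IDS = set(range(6))  # <pad>, <cls>, <eos>, <unk>, <mask>, <null>
REGULAR_IDS = [i for i in range(26) if i not in SPECIAL_IDS]


def mask_tokens(input_ids, mlm_probability=0.15):
    """Return (masked_inputs, labels); labels are -100 where no loss is taken."""
    masked, labels = [], []
    for token in input_ids:
        if token in SPECIAL_IDS or random.random() >= mlm_probability:
            masked.append(token)
            labels.append(-100)  # not selected: ignored by the loss
            continue
        labels.append(token)     # selected: the model must predict the original
        roll = random.random()
        if roll < 0.8:           # 80%: replace with <mask>
            masked.append(MASK_ID)
        elif roll < 0.9:         # 10%: replace with a different random token
            masked.append(random.choice([t for t in REGULAR_IDS if t != token]))
        else:                    # 10%: leave the token as is
            masked.append(token)
    return masked, labels
```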

#### Pre-training

- Epochs: 6
- Optimizer: AdamW
- Learning rate: 5e-5
- Learning rate warm-up: 2,000 steps
- Learning rate scheduler: Cosine
- Minimum learning rate: 1e-5
- Weight decay: 0.01
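
As a rough PyTorch sketch, these hyperparameters correspond to a setup like the one below. The stand-in model and the total step count are placeholder assumptions; the original training code may differ in detail:

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in; in practice, the AIDO.RNA backbone
total_steps = 100_000          # placeholder; depends on corpus size, batch size, and the 6 epochs

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)

# Linear warm-up for 2,000 steps, then cosine decay towards the 1e-5 floor.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-3, end_factor=1.0, total_iters=2_000
)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=total_steps - 2_000, eta_min=1e-5
)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[2_000]
)
```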

## Citation

**BibTeX**:

```bibtex
@article{Zou2024.11.28.625345,
  author = {Zou, Shuxian and Tao, Tianhua and Mahbub, Sazan and Ellington, Caleb N. and Algayres, Robin and Li, Dian and Zhuang, Yonghao and Wang, Hongyi and Song, Le and Xing, Eric P.},
  title = {A Large-Scale Foundation Model for RNA Function and Structure Prediction},
  elocation-id = {2024.11.28.625345},
  year = {2024},
  doi = {10.1101/2024.11.28.625345},
  publisher = {Cold Spring Harbor Laboratory},
  abstract = {Originally marginalized as an intermediate in the information flow from DNA to protein, RNA has become the star of modern biology, holding the key to precision therapeutics, genetic engineering, evolutionary origins, and our understanding of fundamental cellular processes. Yet RNA is as mysterious as it is prolific, serving as an information store, a messenger, and a catalyst, spanning many undercharacterized functional and structural classes. Deciphering the language of RNA is important not only for a mechanistic understanding of its biological functions but also for accelerating drug design. Toward this goal, we introduce AIDO.RNA, a pre-trained module for RNA in an AI-driven Digital Organism [1]. AIDO.RNA contains a scale of 1.6 billion parameters, trained on 42 million non-coding RNA (ncRNA) sequences at single-nucleotide resolution, and it achieves state-of-the-art performance on a comprehensive set of tasks, including structure prediction, genetic regulation, molecular function across species, and RNA sequence design. AIDO.RNA after domain adaptation learns to model essential parts of protein translation that protein language models, which have received widespread attention in recent years, do not. More broadly, AIDO.RNA hints at the generality of biological sequence modeling and the ability to leverage the central dogma to improve many biomolecular representations. Models and code are available through ModelGenerator in https://github.com/genbio-ai/AIDO and on Hugging Face. Competing Interest Statement: The authors have declared no competing interest.},
  URL = {https://www.biorxiv.org/content/early/2024/11/29/2024.11.28.625345},
  eprint = {https://www.biorxiv.org/content/early/2024/11/29/2024.11.28.625345.full.pdf},
  journal = {bioRxiv}
}
```

## Contact

Please use GitHub issues of [MultiMolecule](https://github.com/DLS5-Omics/multimolecule/issues) for any questions or comments on the model card.

Please contact the authors of the [AIDO.RNA paper](https://doi.org/10.1101/2024.11.28.625345) for questions or comments on the paper/model.

## License

This model is licensed under the [AGPL-3.0 License](https://www.gnu.org/licenses/agpl-3.0.html).

```spdx
SPDX-License-Identifier: AGPL-3.0-or-later
```
config.json ADDED
@@ -0,0 +1,32 @@
{
  "architectures": [
    "AidoRnaForPreTraining"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head": null,
  "hidden_act": "silu",
  "hidden_dropout": 0.0,
  "hidden_size": 1280,
  "id2label": null,
  "initializer_range": 0.02,
  "intermediate_size": 3392,
  "label2id": null,
  "layer_norm_eps": 1e-05,
  "lm_head": null,
  "mask_token_id": 4,
  "max_position_embeddings": 1024,
  "model_type": "aido.rna",
  "null_token_id": 5,
  "num_attention_heads": 20,
  "num_hidden_layers": 33,
  "num_labels": 1,
  "pad_token_id": 0,
  "position_embedding_type": "rotary",
  "torch_dtype": "float32",
  "transformers_version": "4.50.0",
  "unk_token_id": 3,
  "use_cache": true,
  "vocab_size": 26
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4fc61367fa16ea8720b5396706c892f6429080015dc7486bfffad325df4786e0
size 2593614752
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:456ab4a5f9b0f617200128e98e4095d5dec68e82e359d362c0aa6cc71dc93bd4
size 2593740722
special_tokens_map.json ADDED
@@ -0,0 +1,12 @@
{
  "additional_special_tokens": [
    "<null>"
  ],
  "bos_token": "<cls>",
  "cls_token": "<cls>",
  "eos_token": "<eos>",
  "mask_token": "<mask>",
  "pad_token": "<pad>",
  "sep_token": "<eos>",
  "unk_token": "<unk>"
}
tokenizer_config.json ADDED
@@ -0,0 +1,69 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<cls>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "<eos>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "<mask>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "5": {
      "content": "<null>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<null>"
  ],
  "bos_token": "<cls>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<cls>",
  "codon": false,
  "eos_token": "<eos>",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "model_max_length": 1000000000000000019884624838656,
  "nmers": 1,
  "pad_token": "<pad>",
  "replace_T_with_U": true,
  "sep_token": "<eos>",
  "tokenizer_class": "RnaTokenizer",
  "unk_token": "<unk>"
}
vocab.txt ADDED
@@ -0,0 +1,26 @@
<pad>
<cls>
<eos>
<unk>
<mask>
<null>
A
C
G
U
N
R
Y
S
W
K
M
B
D
H
V
.
X
*
-
I