---
license: apache-2.0
library_name: transformers
pipeline_tag: feature-extraction
tags:
- chemistry
---

# MoLFormer-XL-both-10%

MoLFormer is a class of models pretrained on SMILES string representations of up to 1.1B molecules from ZINC and PubChem.
This repository is for the model pretrained on 10% of both datasets.

It was introduced in the paper [Large-Scale Chemical Language Representations Capture Molecular Structure and Properties](https://arxiv.org/abs/2106.09553) by Ross et al. and first released in [this repository](https://github.com/IBM/molformer).

## Model Details

### Model Description

MoLFormer is a large-scale chemical language model pretrained on small molecules represented as SMILES strings. MoLFormer leverages masked language modeling and employs a linear attention Transformer combined with rotary embeddings.

![MoLFormer pipeline](pipeline.jpeg)

The image above gives an overview of the MoLFormer pipeline. A transformer-based neural network is trained in a self-supervised fashion on a large collection of chemical molecules, represented as SMILES sequences, from the two public chemical datasets PubChem and ZINC. The MoLFormer architecture combines an efficient linear attention mechanism with relative positional embeddings, with the goal of learning a meaningful and compressed representation of chemical molecules. After pretraining, the MoLFormer foundation model is adapted to downstream molecular property prediction tasks via fine-tuning on task-specific data. To further test its representational power, MoLFormer encodings were used to recover molecular similarity, and the correspondence between interatomic spatial distances and attention values was analyzed for given molecules.
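
As a purely illustrative aside, the sketch below shows the general idea behind rotary position embeddings: query/key features are rotated in pairs by position-dependent angles before attention. This is a generic sketch, not MoLFormer's actual attention implementation, which lives in the released model code.

```py
import torch

def apply_rotary(x, theta_base=10000.0):
    # x: (batch, seq_len, dim) query or key vectors; dim must be even.
    _, seq_len, dim = x.shape
    inv_freq = 1.0 / (theta_base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]  # (seq_len, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]    # split features into rotation pairs
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin   # rotate each pair by its position angle
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Applied to queries and keys before attention, the rotation makes attention
# scores depend on relative token positions.
q = apply_rotary(torch.randn(2, 16, 64))
k = apply_rotary(torch.randn(2, 16, 64))
```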

## Intended use and limitations

You can use the model for masked language modeling, but it is mainly intended to be used as a feature extractor or to be fine-tuned for a prediction task. The "frozen" model embeddings may be used for similarity measurements, visualization, or training predictor models. The model may also be fine-tuned for sequence classification tasks (e.g., solubility or toxicity).

This model is not intended for molecule generation. It is also not tested for molecules larger than ~200 atoms (i.e., macromolecules). Furthermore, using invalid or noncanonical SMILES may result in worse performance.

## Example code

Use the code below to get started with the model.

```py
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ibm/MoLFormer-XL-both-10pct", deterministic_eval=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ibm/MoLFormer-XL-both-10pct", trust_remote_code=True)

smiles = ["Cn1c(=O)c2c(ncn2C)n(C)c1=O", "CC(=O)Oc1ccccc1C(=O)O"]
inputs = tokenizer(smiles, padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
outputs.pooler_output  # fixed-size embedding for each input molecule
```
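
As noted under intended use, the frozen embeddings can serve for similarity measurements. The sketch below builds directly on the snippet above; the choice of cosine similarity is illustrative, not prescribed by the model.

```py
import torch.nn.functional as F

# Compare the two pooled molecule embeddings computed above.
embeddings = outputs.pooler_output  # (num_molecules, hidden_size)
similarity = F.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```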

## Training Details

### Data

We trained MoLFormer-XL on a combination of molecules from the ZINC15 and PubChem datasets. This repository contains the version trained on 10% ZINC + 10% PubChem.

Molecules were canonicalized with RDKit prior to training, and isomeric information was removed. Also, molecules longer than 202 tokens were dropped.
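
For reference, a minimal sketch of this kind of preprocessing with RDKit is shown below; treat it as an approximation rather than the exact settings used for training.

```py
from rdkit import Chem

def canonicalize(smiles):
    """Return canonical SMILES with stereochemistry (isomeric information) stripped,
    or None if the input cannot be parsed."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return Chem.MolToSmiles(mol, isomericSmiles=False, canonical=True)

print(canonicalize("C[C@H](N)C(=O)O"))  # stereo marker removed: 'CC(N)C(=O)O'
```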

### Hardware

- 16 x NVIDIA V100 GPUs

## Evaluation

We evaluated MoLFormer by fine-tuning on 11 benchmark tasks from MoleculeNet. The tables below show the performance of different MoLFormer variants:

| MoLFormer variant | BBBP | HIV | BACE | SIDER | ClinTox | Tox21 |
|-------------------------|----------|----------|----------|----------|----------|----------|
| 10% ZINC + 10% PubChem | 91.5 | 81.3 | 86.6 | 68.9 | 94.6 | 84.5 |
| 10% ZINC + 100% PubChem | 92.2 | 79.2 | 86.3 | 69.0 | 94.7 | 84.5 |
| 100% ZINC | 89.9 | 78.4 | 87.7 | 66.8 | 82.2 | 83.2 |
| MoLFormer-Base | 90.9 | 77.7 | 82.8 | 64.8 | 61.3 | 43.1 |
| MoLFormer-XL | **93.7** | **82.2** | **88.2** | **69.0** | **94.8** | **84.7** |

| MoLFormer variant | QM9 | QM8 | ESOL | FreeSolv | Lipophilicity |
|-------------------------|------------|------------|--------|------------|---------------|
| 10% ZINC + 10% PubChem | 1.7754 | 0.0108 | 0.3295 | 0.2221 | 0.5472 |
| 10% ZINC + 100% PubChem | 1.9093 | **0.0102** | 0.2775 | **0.2050** | 0.5331 |
| 100% ZINC | 1.9403 | 0.0124 | 0.3023 | 0.2981 | 0.5440 |
| MoLFormer-Base | 2.2500 | 0.0111 | 0.2798 | 0.2596 | 0.6492 |
| MoLFormer-XL | **1.5984** | **0.0102** | 0.2787 | 0.2308 | **0.5298** |

We report AUROC for all classification tasks, average MAE for QM9/QM8, and RMSE for the remaining regression tasks.
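
The benchmark numbers above come from the fine-tuning protocol described in the paper. As a rough illustration only, a classification head on top of the pretrained encoder could be set up along the lines below; the head architecture, optimizer, toy data, and the `hidden_size` config attribute are assumptions for this sketch, not the paper's exact configuration.

```py
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MolformerClassifier(nn.Module):
    """Pretrained MoLFormer encoder with a small classification head (illustrative)."""
    def __init__(self, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(
            "ibm/MoLFormer-XL-both-10pct", trust_remote_code=True
        )
        self.head = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, **inputs):
        pooled = self.encoder(**inputs).pooler_output  # pooled molecule embedding
        return self.head(pooled)                       # task logits

tokenizer = AutoTokenizer.from_pretrained("ibm/MoLFormer-XL-both-10pct", trust_remote_code=True)
model = MolformerClassifier(num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One illustrative training step on hypothetical labels.
batch = tokenizer(["CCO", "c1ccccc1O"], padding=True, return_tensors="pt")
labels = torch.tensor([0, 1])
loss = nn.functional.cross_entropy(model(**batch), labels)
loss.backward()
optimizer.step()
```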

## Citation

```
@article{10.1038/s42256-022-00580-7,
  year = {2022},
  title = {{Large-scale chemical language representations capture molecular structure and properties}},
  author = {Ross, Jerret and Belgodere, Brian and Chenthamarakshan, Vijil and Padhi, Inkit and Mroueh, Youssef and Das, Payel},
  journal = {Nature Machine Intelligence},
  doi = {10.1038/s42256-022-00580-7},
  pages = {1256--1264},
  number = {12},
  volume = {4}
}
```

```
@misc{https://doi.org/10.48550/arxiv.2106.09553,
  doi = {10.48550/ARXIV.2106.09553},
  url = {https://arxiv.org/abs/2106.09553},
  author = {Ross, Jerret and Belgodere, Brian and Chenthamarakshan, Vijil and Padhi, Inkit and Mroueh, Youssef and Das, Payel},
  keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), Biomolecules (q-bio.BM), FOS: Computer and information sciences, FOS: Biological sciences},
  title = {Large-Scale Chemical Language Representations Capture Molecular Structure and Properties},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```