Commit 22e3977
Parent(s): 0c43d7f
Update README.md
README.md CHANGED
@@ -33,7 +33,8 @@ It has been shown that pLMs like ESM-2 contain structural information in the att
  and that single sequence masked language models like ESMFold can be used in atomically accurate predictions of folds, even outperforming
  AlphaFold2. In our approach we show a positive correlation: scaling the model size and data
  in a 1-to-1 fashion provides competitive and possibly even SOTA performance, although our comparison to the SOTA models is not as fair and
- comprehensive as it could be (see [this report for more details](https://api.wandb.ai/links/amelie-schreiber-math/0asqd3hs)
+ comprehensive as it could be (see [this report for more details](https://api.wandb.ai/links/amelie-schreiber-math/0asqd3hs) and also
+ [this report](https://wandb.ai/amelie-schreiber-math/huggingface/reports/ESM-2-Binding-Sites-Predictor-Part-3-Scaling-Results--Vmlldzo1NDA3Nzcy?accessToken=npsm0tatgumcidfwxubzjyuhal512xu8sjmpnf11sebktjm9mheg69ja397q57ok)).


  This model is a finetuned version of the 35M parameter `esm2_t12_35M_UR50D` ([see here](https://huggingface.co/facebook/esm2_t12_35M_UR50D)
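For reference, the sketch below shows one way the base checkpoint named in the README could be loaded for per-residue (token-level) prediction with the Hugging Face `transformers` library. The token-classification head and the binary label count are assumptions for illustration only; this commit does not specify the finetuning setup.

```python
# Minimal sketch: loading the base ESM-2 checkpoint referenced above.
# Assumes the `transformers` library; num_labels=2 assumes a binary
# binding / non-binding label per residue, which is an illustrative
# guess rather than something stated in this commit.
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "facebook/esm2_t12_35M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=2)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # example protein sequence
inputs = tokenizer(sequence, return_tensors="pt")
outputs = model(**inputs)
# Logits have shape (batch, tokens including special tokens, num_labels).
print(outputs.logits.shape)
```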