---
license: bsd-3-clause
tags:
- protein
- progen2
---
This is the unidirectional model, fine-tuned on 7 protein families:

- PF00002 - GPCRs
- PF00042 - Globins
- PF00125 - Core histones
- PF00127 - Copper binding proteins
- PF00257 - Dehydrins
- PF00262 - Calreticulins
- PF03668 - P-loop ATPase

Check out the [GitHub repo](https://github.com/hugohrban/ProGen2-finetuning) for more information.

Example usage:
```python
from transformers import AutoModelForCausalLM
from tokenizers import Tokenizer
# optionally use local imports
# from models.progen.modeling_progen import ProGenForCausalLM
# from models.progen.configuration_progen import ProGenConfig
import torch
import torch.nn.functional as F

# load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("hugohrban/progen2-small-mix7", trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained("hugohrban/progen2-small-mix7")
tokenizer.no_padding()

# prepare input: family tag, then "1" (N-terminus) and the first residues
prompt = "<|pf03668|>1MEVVIVTGMSGAGK"
input_ids = torch.tensor(tokenizer.encode(prompt).ids).to(model.device)

# forward pass
logits = model(input_ids).logits

# print output probabilities for the next token
next_token_logits = logits[-1, :]
next_token_probs = F.softmax(next_token_logits, dim=-1)
for i in range(tokenizer.get_vocab_size(with_added_tokens=False)):
    print(f"{tokenizer.id_to_token(i)}: {100 * next_token_probs[i].item():.2f} %")
```
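
To generate new sequences rather than just inspect next-token probabilities, you can sample from the model autoregressively. The sketch below only reuses the forward pass shown above; the `sample_sequence` helper, the temperature, the token budget, and the stop condition on the `2` token (which ProGen2 uses to mark the C-terminus) are illustrative assumptions, not part of the released code.

```python
# minimal autoregressive sampling sketch (illustrative, not part of the released code)
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_sequence(model, tokenizer, prompt, max_new_tokens=100, temperature=1.0):
    ids = tokenizer.encode(prompt).ids
    for _ in range(max_new_tokens):
        input_ids = torch.tensor(ids).to(model.device)
        # same forward pass as above; take logits for the last position
        next_logits = model(input_ids).logits[-1, :] / temperature
        probs = F.softmax(next_logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1).item()
        ids.append(next_id)
        # assumption: "2" is the C-terminus / end-of-sequence token in ProGen2 tokenization
        if tokenizer.id_to_token(next_id) == "2":
            break
    return tokenizer.decode(ids)

print(sample_sequence(model, tokenizer, "<|pf03668|>1MEVVIVTGMSGAGK"))
```

As in the forward-pass example, the prompt starts with a family tag (here `<|pf03668|>`) followed by `1` and the first residues, which steers generation towards that family.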