---
language: en
---
# bert-base-cased for Advertisement Classification
This is a bert-base-cased model trained on a binary dataset prepared for advertisement classification. The model is intended for English text.
<b>Labels</b>:
- 0 -> non-advertisement
- 1 -> advertisement
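The predicted class index can be mapped back to these labels with a small lookup; the `ID2LABEL` dict below is illustrative and not part of the model's config:

```python
# Map class indices to the human-readable labels listed above
ID2LABEL = {0: "non-advertisement", 1: "advertisement"}

def label_for(index: int) -> str:
    """Return the label name for a predicted class index."""
    return ID2LABEL[index]

print(label_for(1))  # advertisement
```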
## Example of classification
```python
import numpy as np
import torch
from scipy.special import softmax
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the tokenizer and model (replace with this model's Hub id)
model_path = '<this-model-repo-id>'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Use a GPU if one is available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device)

text = 'Young Brad Pitt early in his career McDonalds Commercial'
encoded_input = tokenizer(text, return_tensors='pt').to(device)
output = model(**encoded_input)

# Convert logits to probabilities and take the most likely class
scores = output.logits[0].detach().cpu().numpy()
scores = softmax(scores)
prediction_class = np.argmax(scores)
print(prediction_class)
```
Output:
```
1
```
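The post-processing step (softmax over the logits, then argmax) can be exercised on its own with dummy logits; the values below are illustrative, not actual model output:

```python
import numpy as np

# Dummy logits for the two classes (illustrative values only)
logits = np.array([-1.2, 2.3])

# Numerically stable softmax, equivalent to scipy.special.softmax
scores = np.exp(logits - logits.max())
scores /= scores.sum()

# The class with the highest probability is the prediction
prediction_class = int(np.argmax(scores))
print(prediction_class)  # 1
```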