Commit 425b9a7
1 Parent(s): 8eff4b8
Update README.md
README.md CHANGED
````diff
@@ -27,12 +27,9 @@ You can use this model directly with a pipeline for masked language modeling:
 ```python
 
 >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
-
+
 >>> tokenizer = AutoTokenizer.from_pretrained("hassan4830/xlm-roberta-base-finetuned-urdu")
 >>> model = AutoModelForSequenceClassification.from_pretrained("hassan4830/xlm-roberta-base-finetuned-urdu")
->>> text = "وہ ایک برا شخص ہے"
->>> pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True, device = 0)
->>> pipe(text)
 
 [{'sequence': "[CLS] hello i'm a role model. [SEP]",
 'score': 0.05292855575680733,
@@ -59,12 +56,11 @@ You can use this model directly with a pipeline for masked language modeling:
 Here is how to use this model to get the features of a given text in PyTorch:
 
 ```python
-from transformers import
-
-model =
-
-
-output = model(**encoded_input)
+>>> from transformers import TextClassificationPipeline
+>>> text = "وہ ایک برا شخص ہے"
+>>> pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer, return_all_scores=True, device = 0)
+>>> pipe(text)
+
 ```
 
 and in TensorFlow:
````
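Taken together, the updated README loads the classifier in one block and builds the pipeline in the next. A minimal, self-contained sketch of that combined usage (the `device=0` GPU setting and the CPU fallback note are assumptions, not part of the commit):

```python
# Consolidated sketch of the usage shown in the updated README.
# Assumption: a CUDA GPU is available (device=0); pass device=-1 to run on CPU.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TextClassificationPipeline,
)

tokenizer = AutoTokenizer.from_pretrained("hassan4830/xlm-roberta-base-finetuned-urdu")
model = AutoModelForSequenceClassification.from_pretrained("hassan4830/xlm-roberta-base-finetuned-urdu")

text = "وہ ایک برا شخص ہے"  # "He is a bad person."
pipe = TextClassificationPipeline(
    model=model,
    tokenizer=tokenizer,
    return_all_scores=True,  # return a score for every label, not just the top one
    device=0,
)
print(pipe(text))
```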
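The block the old README left unfinished (`from transformers import`, `model =`, `output = model(**encoded_input)`) was a feature-extraction stub. A hedged sketch of how that pattern is usually filled in, assuming the plain `AutoModel` encoder was intended (the commit itself does not say):

```python
# Sketch of the feature-extraction stub from the old README.
# Assumption: AutoModel (encoder without the classification head) is the intended class.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("hassan4830/xlm-roberta-base-finetuned-urdu")
model = AutoModel.from_pretrained("hassan4830/xlm-roberta-base-finetuned-urdu")

text = "وہ ایک برا شخص ہے"
encoded_input = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

# Per-token features: shape (batch_size, sequence_length, hidden_size)
print(output.last_hidden_state.shape)
```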