Commit 8ba0b00 · Parent: 2105e90
Update README.md
README.md CHANGED
@@ -33,9 +33,9 @@ Note that this model is primarily aimed at being fine-tuned on tasks that use th
 You can use this model directly with a pipeline for masked language modeling:
 
 ```python
-from transformers import
-
-mlm_model
+from transformers import AutoTokenizer, AutoModelForMaskedLM
+tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
+mlm_model = AutoModelForMaskedLM.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
 ```
 
 You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:
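The changed snippet above loads the backbone directly; the fill-mask pipeline the prose mentions would look roughly like the sketch below. This is a minimal sketch, assuming the repo's remote code is compatible with the standard `fill-mask` pipeline; the example sentence is illustrative only.

```python
from transformers import pipeline

# Sketch only: assumes this checkpoint's custom (trust_remote_code) model
# class works with the standard fill-mask pipeline.
unmasker = pipeline(
    "fill-mask",
    model="kiddothe2b/hierarchical-transformer-EC2-mini-1024",
    trust_remote_code=True,
)
mask = unmasker.tokenizer.mask_token  # use whatever mask token the tokenizer defines
print(unmasker(f"Paris is the {mask} of France."))
```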
@@ -43,7 +43,7 @@ You can also fine-tune it for SequenceClassification, SequentialSentenceClassific
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
-doc_classifier = AutoModelforSequenceClassification(model=
+doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/hierarchical-transformer-EC2-mini-1024", trust_remote_code=True)
 ```
 
 ## Limitations and bias
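For the classification snippet above, a minimal end-to-end inference sketch follows. Assumptions are flagged in the comments: the input text and the 1024-token cap are illustrative, and the sequence-classification head is randomly initialized until the model is fine-tuned.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "kiddothe2b/hierarchical-transformer-EC2-mini-1024"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# The classification head is randomly initialized until fine-tuned.
doc_classifier = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True)

# Illustrative input; 1024 matches the max length implied by the model name.
inputs = tokenizer(
    "A long document to classify ...",
    truncation=True,
    max_length=1024,
    return_tensors="pt",
)
with torch.no_grad():
    logits = doc_classifier(**inputs).logits
predicted_class_id = int(logits.argmax(dim=-1))
```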
@@ -96,7 +96,8 @@ The following hyperparameters were used during training:
 - Tokenizers 0.11.6
 
 
-##Citing
+## Citing
+
 If you use HAT in your research, please cite [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/xxx)
 
 ```