Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ The EstBERT model is trained both on 128 and 512 sequence length of data. For tr
 
 ### Reference to cite
 
-
+[Tanvir et al. 2021](https://aclanthology.org/2021.nodalida-main.2)
 
 ### Why would I use?
 Overall, EstBERT performs better on part-of-speech (POS) tagging, named entity recognition (NER), rubric classification, and sentiment classification tasks than mBERT and XLM-RoBERTa. The comparative results can be found below:
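
For readers who want to try the model this README documents, a minimal sketch using the Hugging Face `transformers` API. The Hub identifier `tartuNLP/EstBERT` is an assumption (it is not stated in this diff), and the 512-token cap reflects the longer of the two training sequence lengths mentioned in the hunk context:

```python
# Minimal sketch: loading EstBERT for masked-LM inference with
# Hugging Face transformers. The Hub id "tartuNLP/EstBERT" is an
# assumption; adjust it if the checkpoint lives elsewhere.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("tartuNLP/EstBERT")
model = AutoModelForMaskedLM.from_pretrained("tartuNLP/EstBERT")

# The README notes the model was trained on 128- and 512-token
# sequences, so inputs are truncated to the larger bound here.
text = "Tartu on Eesti suuruselt teine linn."
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, seq_len, vocab_size)
```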