mphi committed on
Commit
5e5f346
1 Parent(s): 75ca7f7

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ The EstBERT model is trained both on 128 and 512 sequence length of data. For tr
 
 ### Reference to cite
 
-(Tanvir et al, 2021)[https://aclanthology.org/2021.nodalida-main.2/]
+[Tanvir et al 2021](https://aclanthology.org/2021.nodalida-main.2)
 
 ### Why would I use?
 Overall EstBERT performs better in parts of speech (POS), name entity recognition (NER), rubric, and sentiment classification tasks compared to mBERT and XLM-RoBERTa. The comparative results can be found below;