Add link to @lxyuan's model
README.md CHANGED
@@ -273,5 +273,8 @@ The following hyperparameters were used during training:
 - Datasets 2.12.0
 - Tokenizers 0.13.2
 
+## See also
+* [lxyuan/span-marker-bert-base-multilingual-cased-multinerd](https://huggingface.co/lxyuan/span-marker-bert-base-multilingual-cased-multinerd) is similar to this model, but trained for 3 epochs instead of 2. It reaches better performance on 7 out of the 10 languages.
+
 ## Contributions
 Many thanks to [Simone Tedeschi](https://huggingface.co/sted97) from [Babelscape](https://babelscape.com) for his insight when training this model and his involvement in the creation of the training dataset.
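For readers comparing the two checkpoints, a minimal usage sketch (not part of this commit) that loads the linked model with the `span_marker` library; it assumes `span_marker` is installed, and the example sentence is purely illustrative:

```python
# Hypothetical quick-start for the model linked in the "See also" section above.
# Both this model and the lxyuan 3-epoch variant load the same way.
from span_marker import SpanMarkerModel

model = SpanMarkerModel.from_pretrained(
    "lxyuan/span-marker-bert-base-multilingual-cased-multinerd"
)

# predict() returns a list of dicts with "span", "label", "score",
# and character offsets for each detected entity.
entities = model.predict(
    "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris."
)
for entity in entities:
    print(entity["span"], entity["label"], entity["score"])
```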