model update
README.md CHANGED
@@ -6,7 +6,7 @@ metrics:
 - precision
 - recall
 model-index:
-- name: tner/bert-large-tweetner7-
+- name: tner/bert-large-tweetner7-continuous
   results:
   - task:
       name: Token Classification
@@ -76,7 +76,7 @@ widget:
 - text: "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {{@Herbie Hancock@}} via {{USERNAME}} link below: {{URL}}"
   example_title: "NER Example 1"
 ---
-# tner/bert-large-tweetner7-
+# tner/bert-large-tweetner7-continuous
 
 This model is a fine-tuned version of [tner/bert-large-tweetner-2020](https://huggingface.co/tner/bert-large-tweetner-2020) on the
 [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_2021` split). The model is first fine-tuned on `train_2020`, and then continuously fine-tuned on `train_2021`.
@@ -108,8 +108,8 @@ For F1 scores, the confidence interval is obtained by bootstrap as below:
 - 90%: [0.6231013705127983, 0.6413574593408826]
 - 95%: [0.6217502353949177, 0.6428942705896876]
 
-Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bert-large-tweetner7-
-and [metric file of entity span](https://huggingface.co/tner/bert-large-tweetner7-
+Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/bert-large-tweetner7-continuous/raw/main/eval/metric.json)
+and [metric file of entity span](https://huggingface.co/tner/bert-large-tweetner7-continuous/raw/main/eval/metric_span.json).
 
 ### Usage
 This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
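The hunk above states that the F1 confidence intervals are obtained by bootstrap. As a generic illustration only (not the repository's actual evaluation code; `bootstrap_f1_ci` and the per-example `counts` input are hypothetical names), a percentile bootstrap over per-example entity counts can be sketched as:

```python
import random

def bootstrap_f1_ci(counts, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for micro-F1.

    counts: list of (tp, fp, fn) tuples, one per evaluation example.
    Returns (lower, upper) bounds of the (1 - alpha) interval.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(n_boot):
        # Resample examples with replacement and recompute micro-F1.
        sample = [counts[rng.randrange(len(counts))] for _ in counts]
        tp = sum(c[0] for c in sample)
        fp = sum(c[1] for c in sample)
        fn = sum(c[2] for c in sample)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores.append(f1)
    scores.sort()
    lower = scores[int((alpha / 2) * n_boot)]
    upper = scores[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper
```

With `alpha=0.1` this yields a 90% interval and with `alpha=0.05` a 95% interval, matching the two intervals reported above.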
@@ -119,7 +119,7 @@ pip install tner
 and activate model as below.
 ```python
 from tner import TransformersNER
-model = TransformersNER("tner/bert-large-tweetner7-
+model = TransformersNER("tner/bert-large-tweetner7-continuous")
 model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
 ```
 It can be used via transformers library but it is not recommended as CRF layer is not supported at the moment.
@@ -143,7 +143,7 @@ The following hyperparameters were used during training:
 - lr_warmup_step_ratio: 0.3
 - max_grad_norm: 1
 
-The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bert-large-tweetner7-
+The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/bert-large-tweetner7-continuous/raw/main/trainer_config.json).
 
 ### Reference
 If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).