model update
- README.md +135 -0
- config.json +1 -1
- eval/metric.json +1 -0
- eval/metric_span.json +1 -0
- eval/prediction.validation.json +0 -0
- pytorch_model.bin +2 -2
- tokenizer_config.json +1 -1
- trainer_config.json +1 -0
README.md
ADDED
@@ -0,0 +1,135 @@
---
datasets:
- ttc
metrics:
- f1
- precision
- recall
model-index:
- name: tner/roberta-large-ttc
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: ttc
      type: ttc
      args: ttc
    metrics:
    - name: F1
      type: f1
      value: 0.8314534321624235
    - name: Precision
      type: precision
      value: 0.8269230769230769
    - name: Recall
      type: recall
      value: 0.8360337005832793
    - name: F1 (macro)
      type: f1_macro
      value: 0.8317396497007042
    - name: Precision (macro)
      type: precision_macro
      value: 0.8296690551538254
    - name: Recall (macro)
      type: recall_macro
      value: 0.8340850231639706
    - name: F1 (entity span)
      type: f1_entity_span
      value: 0.8739929100870126
    - name: Precision (entity span)
      type: precision_entity_span
      value: 0.8692307692307693
    - name: Recall (entity span)
      type: recall_entity_span
      value: 0.8788075178224238

pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
  example_title: "NER Example 1"
---
# tner/roberta-large-ttc

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
[tner/ttc](https://huggingface.co/datasets/tner/ttc) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more details). It achieves the following results on the test set:
- F1 (micro): 0.8314534321624235
- Precision (micro): 0.8269230769230769
- Recall (micro): 0.8360337005832793
- F1 (macro): 0.8317396497007042
- Precision (macro): 0.8296690551538254
- Recall (macro): 0.8340850231639706

The per-entity breakdown of the F1 score on the test set is given below:
- location: 0.7817403708987161
- organization: 0.7737656595431097
- person: 0.939712918660287

For the F1 scores, confidence intervals are obtained by bootstrap as below:
- F1 (micro):
    - 90%: [0.8153670265512099, 0.8476331336073506]
    - 95%: [0.8126974643551524, 0.8505459585794019]
- F1 (macro):
    - 90%: [0.8164152043391184, 0.8483857154851796]
    - 95%: [0.8121405650665432, 0.8514439162114872]

The full evaluation can be found in the [metric file of NER](https://huggingface.co/tner/roberta-large-ttc/raw/main/eval/metric.json)
and the [metric file of entity span](https://huggingface.co/tner/roberta-large-ttc/raw/main/eval/metric_span.json).
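
Both metric files are plain JSON, so they can also be fetched and inspected programmatically. The snippet below is an illustrative sketch, assuming only the `huggingface_hub` package and the Python standard library:
```python
import json

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Download eval/metric.json from this model repository and load it.
path = hf_hub_download(repo_id="tner/roberta-large-ttc", filename="eval/metric.json")
with open(path) as f:
    metric = json.load(f)

print(metric["micro/f1"], metric["macro/f1"])   # micro / macro F1 on the test set
print(metric["micro/f1_ci"]["95"])              # 95% bootstrap CI for micro F1
for entity, scores in metric["per_entity_metric"].items():
    print(entity, scores["f1"])                 # per-entity F1 (location / organization / person)
```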

### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip:
```shell
pip install tner
```
and activate the model as below.
```python
from tner import TransformersNER
model = TransformersNER("tner/roberta-large-ttc")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used via the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
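
For reference, a minimal sketch of the plain transformers route is below; it loads the raw token-classification head without the CRF decoding used by T-NER, so predictions may differ from the `tner` output above:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Load the checkpoint directly with transformers (no CRF layer applied).
tokenizer = AutoTokenizer.from_pretrained("tner/roberta-large-ttc")
model = AutoModelForTokenClassification.from_pretrained("tner/roberta-large-ttc")

# Group sub-word predictions into entity spans with the built-in pipeline.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Jacob Collier is a Grammy awarded English artist from London"))
```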

### Training hyperparameters

The following hyperparameters were used during training:
- dataset: ['tner/ttc']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: roberta-large
- crf: True
- max_length: 128
- epoch: 16
- batch_size: 64
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 2
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-ttc/raw/main/trainer_config.json).
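
For context, the sketch below shows roughly how these settings could be plugged into T-NER's hyper-parameter search. The `GridSearcher` keyword names are an assumption based on the keys in `trainer_config.json` and may not match the current T-NER API exactly, so check the [T-NER repository](https://github.com/asahi417/tner) before running it; `checkpoint_dir` is a hypothetical local output path, and the settings left at `None` above (weight_decay, max_grad_norm) are omitted.
```python
from tner import GridSearcher  # assumed interface; see the T-NER repository

searcher = GridSearcher(
    checkpoint_dir="./ckpt_ttc",   # hypothetical output directory
    dataset="tner/ttc",
    model="roberta-large",
    epoch=16,
    batch_size=64,
    # single-element search spaces pin the run to the configuration listed above
    crf=[True],
    lr=[1e-5],
    random_seed=[42],
    gradient_accumulation_steps=[2],
    lr_warmup_step_ratio=[0.1],
)
searcher.train()
```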

### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-camacho-collados-2021-ner,
    title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
    author = "Ushio, Asahi  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
    month = apr,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.eacl-demos.7",
    doi = "10.18653/v1/2021.eacl-demos.7",
    pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
config.json
CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "tner_ckpt/ttc_roberta_large/
+  "_name_or_path": "tner_ckpt/ttc_roberta_large/model_hyypuo/epoch_15",
   "architectures": [
     "RobertaForTokenClassification"
   ],
eval/metric.json
ADDED
@@ -0,0 +1 @@
{"micro/f1": 0.8314534321624235, "micro/f1_ci": {"90": [0.8153670265512099, 0.8476331336073506], "95": [0.8126974643551524, 0.8505459585794019]}, "micro/recall": 0.8360337005832793, "micro/precision": 0.8269230769230769, "macro/f1": 0.8317396497007042, "macro/f1_ci": {"90": [0.8164152043391184, 0.8483857154851796], "95": [0.8121405650665432, 0.8514439162114872]}, "macro/recall": 0.8340850231639706, "macro/precision": 0.8296690551538254, "per_entity_metric": {"location": {"f1": 0.7817403708987161, "f1_ci": {"90": [0.7488828052379153, 0.8171473330846698], "95": [0.7417012065948236, 0.8221992266543787]}, "precision": 0.7806267806267806, "recall": 0.7828571428571428}, "organization": {"f1": 0.7737656595431097, "f1_ci": {"90": [0.7491889128022777, 0.8000146842878121], "95": [0.7428538612994967, 0.8038162886292103]}, "precision": 0.7586705202312138, "recall": 0.7894736842105263}, "person": {"f1": 0.939712918660287, "f1_ci": {"90": [0.9228016331473269, 0.9576607614300043], "95": [0.9193529240572251, 0.9603688809668725]}, "precision": 0.9497098646034816, "recall": 0.9299242424242424}}}
eval/metric_span.json
ADDED
@@ -0,0 +1 @@
{"micro/f1": 0.8739929100870126, "micro/f1_ci": {"90": [0.8607175432025641, 0.8879493299842456], "95": [0.8581408590354063, 0.8902564058328115]}, "micro/recall": 0.8788075178224238, "micro/precision": 0.8692307692307693, "macro/f1": 0.8739929100870126, "macro/f1_ci": {"90": [0.8607175432025641, 0.8879493299842456], "95": [0.8581408590354063, 0.8902564058328115]}, "macro/recall": 0.8788075178224238, "macro/precision": 0.8692307692307693}
eval/prediction.validation.json
ADDED
The diff for this file is too large to render.
pytorch_model.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:9c610c34480a6d26278ed2b6da33799d1a0bfe9eafb7df2078b2320b7f58e28a
+size 1417405809
tokenizer_config.json
CHANGED
@@ -6,7 +6,7 @@
   "errors": "replace",
   "mask_token": "<mask>",
   "model_max_length": 512,
-  "name_or_path": "tner_ckpt/ttc_roberta_large/
+  "name_or_path": "tner_ckpt/ttc_roberta_large/model_hyypuo/epoch_15",
   "pad_token": "<pad>",
   "sep_token": "</s>",
   "special_tokens_map_file": "tner_ckpt/ttc_roberta_large/model_hyypuo/epoch_5/special_tokens_map.json",
trainer_config.json
ADDED
@@ -0,0 +1 @@
{"dataset": ["tner/ttc"], "dataset_split": "train", "dataset_name": null, "local_dataset": null, "model": "roberta-large", "crf": true, "max_length": 128, "epoch": 16, "batch_size": 64, "lr": 1e-05, "random_seed": 42, "gradient_accumulation_steps": 2, "weight_decay": null, "lr_warmup_step_ratio": 0.1, "max_grad_norm": null}