update

Files changed:
- .gitattributes +9 -0
- README.md +124 -0
- dataset/de.jsonl +3 -0
- dataset/en.jsonl +3 -0
- dataset/es.jsonl +3 -0
- dataset/fr.jsonl +3 -0
- dataset/it.jsonl +3 -0
- dataset/nl.jsonl +3 -0
- dataset/pl.jsonl +3 -0
- dataset/pt.jsonl +3 -0
- dataset/ru.jsonl +3 -0
- multinerd.py +89 -0
.gitattributes
CHANGED
@@ -49,3 +49,12 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+dataset/it.jsonl filter=lfs diff=lfs merge=lfs -text
+dataset/pt.jsonl filter=lfs diff=lfs merge=lfs -text
+dataset/es.jsonl filter=lfs diff=lfs merge=lfs -text
+dataset/fr.jsonl filter=lfs diff=lfs merge=lfs -text
+dataset/nl.jsonl filter=lfs diff=lfs merge=lfs -text
+dataset/pl.jsonl filter=lfs diff=lfs merge=lfs -text
+dataset/ru.jsonl filter=lfs diff=lfs merge=lfs -text
+dataset/de.jsonl filter=lfs diff=lfs merge=lfs -text
+dataset/en.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,124 @@
+---
+language:
+- de
+- en
+- es
+- fr
+- it
+- nl
+- pl
+- pt
+- ru
+multilinguality:
+- multilingual
+size_categories:
+- 10K<n<100K
+task_categories:
+- token-classification
+task_ids:
+- named-entity-recognition
+pretty_name: WikiNeural
+---
+
+# Dataset Card for "tner/wikineural"
+
+## Dataset Description
+
+- **Repository:** [T-NER](https://github.com/asahi417/tner)
+- **Paper:** [https://aclanthology.org/2021.findings-emnlp.215/](https://aclanthology.org/2021.findings-emnlp.215/)
+- **Dataset:** WikiNeural
+- **Domain:** Wikipedia
+- **Number of Entity Types:** 16
+
+### Dataset Summary
+
+WikiNeural NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
+- Entity Types: `PER`, `LOC`, `ORG`, `ANIM`, `BIO`, `CEL`, `DIS`, `EVE`, `FOOD`, `INST`, `MEDIA`, `PLANT`, `MYTH`, `TIME`, `VEHI`, `MISC`
+
+## Dataset Structure
+
+### Data Instances
+
+An example of `train` looks as follows.
+
+```
+{
+  'tokens': ['I', 'hate', 'the', 'words', 'chunder', ',', 'vomit', 'and', 'puke', '.', 'BUUH', '.'],
+  'tags': [6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6]
+}
+```
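
A record like the one above can be loaded directly with the `datasets` library; a minimal sketch, assuming the dataset is served from the `tner/wikineural` Hub repository with one builder configuration per language code (as defined in the loader script below):

```python
# Minimal loading sketch: one configuration per language code.
from datasets import load_dataset

dataset = load_dataset("tner/wikineural", "en")
example = dataset["train"][0]
print(example["tokens"])  # list of word tokens
print(example["tags"])    # parallel list of integer label IDs
```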
+
+### Label ID
+
+The `label2id` dictionary can be found [here](https://huggingface.co/datasets/tner/wikineural/raw/main/dataset/label.json).
+```python
+{
+    "O": 0,
+    "B-PER": 1,
+    "I-PER": 2,
+    "B-LOC": 3,
+    "I-LOC": 4,
+    "B-ORG": 5,
+    "I-ORG": 6,
+    "B-ANIM": 7,
+    "I-ANIM": 8,
+    "B-BIO": 9,
+    "I-BIO": 10,
+    "B-CEL": 11,
+    "I-CEL": 12,
+    "B-DIS": 13,
+    "I-DIS": 14,
+    "B-EVE": 15,
+    "I-EVE": 16,
+    "B-FOOD": 17,
+    "I-FOOD": 18,
+    "B-INST": 19,
+    "I-INST": 20,
+    "B-MEDIA": 21,
+    "I-MEDIA": 22,
+    "B-PLANT": 23,
+    "I-PLANT": 24,
+    "B-MYTH": 25,
+    "I-MYTH": 26,
+    "B-TIME": 27,
+    "I-TIME": 28,
+    "B-VEHI": 29,
+    "I-VEHI": 30,
+    "B-MISC": 31,
+    "I-MISC": 32
+}
+```
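
To decode integer tags back into label strings, the mapping can be inverted; a sketch that fetches `label.json` from the URL above:

```python
# Sketch: fetch label2id from the dataset repository and invert it to decode tags.
import json
import urllib.request

LABEL_URL = "https://huggingface.co/datasets/tner/wikineural/raw/main/dataset/label.json"
with urllib.request.urlopen(LABEL_URL) as response:
    label2id = json.load(response)
id2label = {i: label for label, i in label2id.items()}

print([id2label[t] for t in [1, 2, 0]])  # ['B-PER', 'I-PER', 'O']
```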
+
+### Data Splits
+
+| language |  train | validation |  test |
+|:---------|-------:|-----------:|------:|
+| de       |  98640 |      12330 | 12372 |
+| en       |  92720 |      11590 | 11597 |
+| es       |  76320 |       9540 |  9618 |
+| fr       | 100800 |      12600 | 12678 |
+| it       |  88400 |      11050 | 11069 |
+| nl       |  83680 |      10460 | 10547 |
+| pl       | 108160 |      13520 | 13585 |
+| pt       |  80560 |      10070 | 10160 |
+| ru       |  92320 |      11540 | 11580 |
+
+
### Citation Information
|
105 |
+
|
106 |
+
```
|
107 |
+
@inproceedings{tedeschi-etal-2021-wikineural-combined,
|
108 |
+
title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
|
109 |
+
author = "Tedeschi, Simone and
|
110 |
+
Maiorca, Valentino and
|
111 |
+
Campolungo, Niccol{\`o} and
|
112 |
+
Cecconi, Francesco and
|
113 |
+
Navigli, Roberto",
|
114 |
+
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
|
115 |
+
month = nov,
|
116 |
+
year = "2021",
|
117 |
+
address = "Punta Cana, Dominican Republic",
|
118 |
+
publisher = "Association for Computational Linguistics",
|
119 |
+
url = "https://aclanthology.org/2021.findings-emnlp.215",
|
120 |
+
doi = "10.18653/v1/2021.findings-emnlp.215",
|
121 |
+
pages = "2521--2533",
|
122 |
+
abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
|
123 |
+
}
|
124 |
+
```
|
dataset/de.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6131a64f666884691990333ecfc983940fbaf25eb584940c4c9640318bcef873
+size 38217905
dataset/en.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5040b1e7a1dea31eeb315a46b7f7cfc4cb3ddceae489f495901392e2f1b0aad1
+size 44663615
dataset/es.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f69709cabe8dc592434c1c998fd2a6e8c4dc77ba8a82c5db35e5403aa2eca7a0
+size 54805232
dataset/fr.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0c6514e478d65eb6b851c551bf5429b41858afa9e30c01bd21472e579aa1f17
+size 55584951
dataset/it.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08ab92f2ff04710d3d06ed14e48144ce9e06b6005f60f7464c60e0e246d0538e
+size 59584208
dataset/nl.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81a42e492e3b22e0cd38be687976cde0425ab2825979fff676b6c7ef6f7e414f
+size 39621455
dataset/pl.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ac94a2574933491091af59a848e5376a699de7d7b8241fce9d1b1210edaf855
+size 44953474
dataset/pt.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0266b95f3cb9867ad5a83e947ce7b7f9bd3a9edc605b5d3f09b3d5d341286b6
+size 51433608
dataset/ru.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1621b8600571cbe9180c3c25becc01c41776192ff4a8691a961eb1bc49de1358
+size 51908152
multinerd.py
ADDED
@@ -0,0 +1,89 @@
+""" NER dataset compiled by T-NER library https://github.com/asahi417/tner/tree/master/tner """
+import json
+
+import datasets
+
+logger = datasets.logging.get_logger(__name__)
+_DESCRIPTION = """[wikineural](https://aclanthology.org/2021.findings-emnlp.215/)"""
+_NAME = "wikineural"
+_VERSION = "1.0.0"
+_CITATION = """
+@inproceedings{tedeschi-etal-2021-wikineural-combined,
+    title = "{W}iki{NE}u{R}al: {C}ombined Neural and Knowledge-based Silver Data Creation for Multilingual {NER}",
+    author = "Tedeschi, Simone and
+      Maiorca, Valentino and
+      Campolungo, Niccol{\`o} and
+      Cecconi, Francesco and
+      Navigli, Roberto",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
+    month = nov,
+    year = "2021",
+    address = "Punta Cana, Dominican Republic",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2021.findings-emnlp.215",
+    doi = "10.18653/v1/2021.findings-emnlp.215",
+    pages = "2521--2533",
+    abstract = "Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",
+}
+"""
+
+_HOME_PAGE = "https://github.com/asahi417/tner"
+_URL = f'https://huggingface.co/datasets/tner/{_NAME}/resolve/main/dataset'
+_LANGUAGE = ['de', 'en', 'es', 'fr', 'it', 'nl', 'pl', 'pt', 'ru']
+# One URL set per language; each split maps to a list of JSONL files.
+_URLS = {
+    l: {
+        str(datasets.Split.TEST): [f'{_URL}/{l}/test.jsonl'],
+        str(datasets.Split.TRAIN): [f'{_URL}/{l}/train.jsonl'],
+        str(datasets.Split.VALIDATION): [f'{_URL}/{l}/dev.jsonl']
+    } for l in _LANGUAGE
+}
+
+
+class WikiNeuralConfig(datasets.BuilderConfig):
+    """BuilderConfig for WikiNeural."""
+
+    def __init__(self, **kwargs):
+        """BuilderConfig.
+
+        Args:
+            **kwargs: keyword arguments forwarded to super.
+        """
+        super(WikiNeuralConfig, self).__init__(**kwargs)
+
+
+class WikiNeural(datasets.GeneratorBasedBuilder):
+    """WikiNeural dataset builder (one configuration per language)."""
+
+    BUILDER_CONFIGS = [
+        WikiNeuralConfig(name=l, version=datasets.Version(_VERSION), description=f"{_DESCRIPTION} (language: {l})")
+        for l in _LANGUAGE
+    ]
+
+    def _split_generators(self, dl_manager):
+        # Download the JSONL files for the selected language and wire them to splits.
+        downloaded_file = dl_manager.download_and_extract(_URLS[self.config.name])
+        return [datasets.SplitGenerator(name=i, gen_kwargs={"filepaths": downloaded_file[str(i)]})
+                for i in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]]
+
+    def _generate_examples(self, filepaths):
+        # Each file holds one JSON object per line with "tokens" and "tags" keys.
+        _key = 0
+        for filepath in filepaths:
+            logger.info(f"generating examples from = {filepath}")
+            with open(filepath, encoding="utf-8") as f:
+                _list = [i for i in f.read().split('\n') if len(i) > 0]
+                for i in _list:
+                    data = json.loads(i)
+                    yield _key, data
+                    _key += 1
+
+    def _info(self):
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "tokens": datasets.Sequence(datasets.Value("string")),
+                    "tags": datasets.Sequence(datasets.Value("int32")),
+                }
+            ),
+            supervised_keys=None,
+            homepage=_HOME_PAGE,
+            citation=_CITATION,
+        )
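
For reference, `_generate_examples` consumes one JSON object per line with `tokens` and `tags` keys matching the features declared in `_info`; a sketch that writes one hypothetical record in that format:

```python
# Sketch: one hypothetical record in the JSONL layout _generate_examples reads.
# Label IDs follow label.json (3 = B-LOC, 0 = O).
import json

record = {"tokens": ["Berlin", "is", "big", "."], "tags": [3, 0, 0, 0]}
with open("train.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```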