---
license: mit
language:
  - en
  - zh
---

# B2NER

We present B2NERD, a cohesive and efficient dataset refined from 54 existing English and Chinese datasets that improves LLMs' generalization on the challenging Open NER task. Our B2NER models, trained on B2NERD, outperform GPT-4 by 6.8-12.0 F1 points and surpass previous methods on 3 out-of-domain benchmarks covering 15 datasets and 6 languages.

## Data

One of the paper's core contributions is the construction of the B2NERD dataset, a cohesive and efficient collection refined from 54 English and Chinese datasets and designed for Open NER model training.
We provide 3 versions of our dataset; a download sketch follows the list below.

  • B2NERD (Recommended): Contains ~52k samples from 54 Chinese or English datasets. This is the final version of our dataset, suitable for out-of-domain / zero-shot NER model training. It has standardized entity definitions and pruned, diverse data.
  • B2NERD_all: Contains ~1.4M samples from 54 datasets. This full-data version is suitable for in-domain supervised evaluation. It has standardized entity definitions but does not go through any data selection or pruning.
  • B2NERD_raw: Raw collected datasets with raw entity labels. It goes through basic format preprocessing but no further standardization.
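The snippet below is a minimal sketch of fetching all three versions with `huggingface_hub`. The repo id `Umean/B2NERD` is inferred from this page, and the on-disk layout of the three versions is not documented here, so check the repository file listing for the exact directory and file names.

```python
# Minimal download sketch using huggingface_hub (assumes repo id "Umean/B2NERD").
from huggingface_hub import snapshot_download

# Downloads the whole dataset repository (B2NERD, B2NERD_all, B2NERD_raw)
# into the local cache and returns the snapshot path.
local_dir = snapshot_download(repo_id="Umean/B2NERD", repo_type="dataset")
print(local_dir)  # inspect this directory to locate the version you need
```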

Below are the dataset statistics and source-dataset counts for B2NERD.

| Split | Lang. | Datasets | Types | Num | Raw Num |
|-------|-------|----------|-------|--------|-----------|
| Train | En    | 19       | 119   | 25,403 | 838,648   |
|       | Zh    | 21       | 222   | 26,504 | 580,513   |
|       | Total | 40       | 341   | 51,907 | 1,419,161 |
| Test  | En    | 7        | 85    | -      | 6,466     |
|       | Zh    | 7        | 60    | -      | 14,257    |
|       | Total | 14       | 145   | -      | 20,723    |

More dataset information can be found in the Appendix of the paper.

## Cite

```bibtex
@article{yang2024beyond,
  title={Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition},
  author={Yang, Yuming and Zhao, Wantong and Huang, Caishuang and Ye, Junjie and Wang, Xiao and Zheng, Huiyuan and Nan, Yang and Wang, Yuran and Xu, Xueying and Huang, Kaixin and others},
  journal={arXiv preprint arXiv:2406.11192},
  year={2024}
}
```