---
license: mit
language:
- en
- zh
---

# B2NER

We present B2NERD, a cohesive and efficient dataset refined from 54 existing English and Chinese datasets that improves LLMs' generalization on the challenging Open NER task.
Our B2NER models, trained on B2NERD, outperform GPT-4 by 6.8-12.0 F1 points and surpass previous methods on 3 out-of-domain benchmarks spanning 15 datasets and 6 languages.

- 📖 Paper: [Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition](http://arxiv.org/abs/2406.11192)
- 🎮 Code Repo: https://github.com/UmeanNever/B2NER
- 📀 Data: See the Data section below. You can download it from [HuggingFace](https://huggingface.co/datasets/Umean/B2NERD) or [Google Drive](https://drive.google.com/file/d/11Wt4RU48i06OruRca2q_MsgpylzNDdjN/view?) (a download sketch follows this list).
- 💾 Model (LoRA Adapters): On the way
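
For convenience, here is a minimal download sketch in Python (not part of the official repo); it only relies on the `snapshot_download` helper from the `huggingface_hub` package and the dataset repo id `Umean/B2NERD` linked above.

```python
# Minimal sketch: download the B2NERD dataset files from the Hugging Face Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

# Downloads every file in the dataset repo to the local HF cache and
# returns the path of the downloaded snapshot.
local_dir = snapshot_download(repo_id="Umean/B2NERD", repo_type="dataset")
print("B2NERD downloaded to:", local_dir)
```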

# Data
One of the paper's core contributions is the construction of the B2NERD dataset, a cohesive and efficient collection refined from 54 English and Chinese datasets and designed for training Open NER models.
We provide 3 versions of our dataset (a small inspection sketch follows this list).
- `B2NERD` (Recommended): Contains ~52k samples from 54 Chinese and English datasets. This is the final version of our dataset, suited for out-of-domain / zero-shot NER model training. Entity definitions are standardized and the data is pruned to a compact, diverse subset.
- `B2NERD_all`: Contains ~1.4M samples from 54 datasets. The full-data version, suited for in-domain supervised evaluation. Entity definitions are standardized, but no data selection or pruning is applied.
- `B2NERD_raw`: The raw collected datasets with their original entity labels. They go through basic format preprocessing but no further standardization.
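
As a hedged illustration (the folder layout inside the repo is an assumption, not documented here), the snippet below checks which of the three versions are present in a downloaded snapshot, reusing `local_dir` from the download sketch above.

```python
# Hedged sketch: assumes each version ships as a top-level folder named
# B2NERD, B2NERD_all, or B2NERD_raw inside the downloaded snapshot.
# Verify against the actual file layout in the dataset repo.
from pathlib import Path

root = Path(local_dir)  # `local_dir` comes from the snapshot_download sketch above
for version in ("B2NERD", "B2NERD_all", "B2NERD_raw"):
    folder = root / version
    if folder.is_dir():
        n_files = sum(1 for p in folder.rglob("*") if p.is_file())
        print(f"{version}: {n_files} files")
    else:
        print(f"{version}: not found (the folder-name assumption may be wrong)")
```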

Below are the dataset statistics for the `B2NERD` dataset. `Num` counts samples after pruning; `Raw Num` counts samples before pruning.

| Split | Lang. | Datasets | Types | Num | Raw Num |
|-------|-------|----------|-------|-----|---------|
| Train | En    | 19       | 119   | 25,403 | 838,648 |
|       | Zh    | 21       | 222   | 26,504 | 580,513 |
|       | Total | 40       | 341   | 51,907 | 1,419,161 |
| Test  | En    | 7        | 85    | -   | 6,466   |
|       | Zh    | 7        | 60    | -   | 14,257  |
|       | Total | 14       | 145   | -   | 20,723  |

More dataset information can be found in the Appendix of the paper.

# Cite
```
@article{yang2024beyond,
  title={Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition},
  author={Yang, Yuming and Zhao, Wantong and Huang, Caishuang and Ye, Junjie and Wang, Xiao and Zheng, Huiyuan and Nan, Yang and Wang, Yuran and Xu, Xueying and Huang, Kaixin and others},
  journal={arXiv preprint arXiv:2406.11192},
  year={2024}
}
```