Update README.md
README.md (CHANGED)
@@ -18,7 +18,8 @@ Our B2NER models, trained on B2NERD, outperform GPT-4 by 6.8-12.0 F1 points and
 
 # Data
 One of the paper's core contributions is the construction of the B2NERD dataset, a cohesive and efficient collection refined from 54 English and Chinese datasets and designed for Open NER model training.
-
+
+In the downloaded files, we provide 3 versions of our dataset.
 - `B2NERD` (Recommended): Contains ~52k samples from 54 Chinese or English datasets. This is the final version of our dataset, suitable for out-of-domain / zero-shot NER model training. It has standardized entity definitions, and the data has been pruned for diversity.
 - `B2NERD_all`: Contains ~1.4M samples from 54 datasets. This is the full-data version of our dataset, suitable for in-domain supervised evaluation. It has standardized entity definitions but does not go through any data selection or pruning.
 - `B2NERD_raw`: The raw collected datasets with their original entity labels. They go through basic format preprocessing but no further standardization.
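
As a quick sanity check after downloading, you can count roughly how many samples each version ships with. The snippet below is only a minimal sketch under assumed names: the root directory `B2NERD_data`, the per-version folder names, and the JSON / JSON-lines file layout are guesses that may not match the actual release, so adjust the paths to whatever the download actually contains.

```python
import json
from pathlib import Path

# Hypothetical layout: one folder per dataset version, each holding JSON or JSON-lines files.
# Adjust ROOT and the glob pattern to match the actual release structure.
ROOT = Path("B2NERD_data")

for version in ["B2NERD", "B2NERD_all", "B2NERD_raw"]:
    total = 0
    for path in sorted((ROOT / version).rglob("*.json*")):
        text = path.read_text(encoding="utf-8").strip()
        try:
            # Case 1: the file is a single JSON array of samples.
            data = json.loads(text)
            total += len(data) if isinstance(data, list) else 1
        except json.JSONDecodeError:
            # Case 2: fall back to JSON-lines, one sample per non-empty line.
            total += sum(1 for line in text.splitlines() if line.strip())
    print(f"{version}: ~{total} samples")
```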