We present B2NERD, a cohesive and efficient dataset that can improve LLMs' generalization on the challenging Open NER task.

Our B2NER models, trained on B2NERD, outperform GPT-4 by 6.8-12.0 F1 points and surpass previous methods in 3 out-of-domain benchmarks across 15 datasets and 6 languages.

- 📖 Paper: [Beyond Boundaries: Learning a Universal Entity Taxonomy across Datasets and Languages for Open Named Entity Recognition](http://arxiv.org/abs/2406.11192)
- 🎮 GitHub Repo: https://github.com/UmeanNever/B2NER
- 📀 Data: See the data section below. You can download the data here (in the "Files and versions" tab) or from [Google Drive](https://drive.google.com/file/d/11Wt4RU48i06OruRca2q_MsgpylzNDdjN/view?usp=drive_link); a hedged download sketch also appears after this list.
- 💾 Model (LoRA Adapters): See the [7B model](https://huggingface.co/Umean/B2NER-Internlm2.5-7B-LoRA) and the [20B model](https://huggingface.co/Umean/B2NER-Internlm2-20B-LoRA); a minimal loading sketch follows this list. You may refer to the GitHub repo for quick demo usage.
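
For a quick start, here is a minimal sketch of attaching the released 7B LoRA adapter to a base model with `transformers` and `peft`. The base-model ID and the prompt string are illustrative assumptions (they are not specified on this page); see the GitHub repo for the official demo and instruction format.

```python
# Minimal sketch: load a base model and attach the B2NER LoRA adapter.
# ASSUMPTIONS: the base-model id and the prompt below are placeholders;
# follow the GitHub repo for the exact setup used by the authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "internlm/internlm2_5-7b-chat"        # assumed base for the 7B adapter
adapter_id = "Umean/B2NER-Internlm2.5-7B-LoRA"  # LoRA adapter released with this work

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach adapter weights without merging
model.eval()

# Placeholder instruction; replace with the repo's official prompt template.
prompt = "Identify all 'person' entities in: Steve Jobs co-founded Apple in 1976."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```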
See the GitHub repo for more information about data usage and this work.
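
If you prefer scripting the download instead of using the "Files and versions" tab, below is a hedged sketch using `huggingface_hub`. The `repo_id` is an assumption inferred from the model pages and may differ from this dataset's actual ID.

```python
# Minimal sketch: fetch the B2NERD files from the Hugging Face Hub.
# ASSUMPTION: repo_id is illustrative; check this page's actual repo id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Umean/B2NERD",   # assumed dataset repo id
    repo_type="dataset",
    local_dir="./B2NERD",
)
print("Downloaded to:", local_dir)
```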

Below are the dataset statistics and source datasets for the `B2NERD` dataset.

| | Zh | 7 | 60 | - | 14,257 |
| | Total | 14 | 145 | - | 20,723 |

<img src="https://cdn-uploads.huggingface.co/production/uploads/655c6b1abfb531437a54c0e6/NIQWzYvwRxbMVgJf1KDzL.png" width="1000"/>
<img src="https://cdn-uploads.huggingface.co/production/uploads/655c6b1abfb531437a54c0e6/9UuY9EuA7R5PvasddMObQ.png" width="1000"/>

More information can be found in the Appendix of the paper.