---
license: openrail
---
# InstructUIE
Code: https://github.com/BeyonderXX/InstructUIE
Large language models have unlocked strong multi-task capabilities when guided by instructive prompts.
However, recent studies show that existing large models still struggle with information extraction tasks.
For example, gpt-3.5-turbo achieves an F1 score of 18.22 on the OntoNotes dataset, far below the state-of-the-art performance.
In this paper, we propose InstructUIE, a unified information extraction framework based on instruction tuning, which can uniformly model various information extraction tasks and capture the inter-task dependency.
To validate the proposed method, we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction datasets in a unified text-to-text format with expert-written instructions.
Experimental results demonstrate that our method achieves performance comparable to BERT in supervised settings and significantly outperforms the state of the art and GPT-3.5 in zero-shot settings.
## Data
Our models are trained and evaluated on **IE INSTRUCTIONS**.
You can download the data from [Baidu NetDisk](https://pan.baidu.com/s/1R0KqeyjPHrsGcPqsbsh1XA?from=init&pwd=ybkt) or [Google Drive](https://drive.google.com/file/d/1T-5IbocGka35I7X3CE6yKe5N_Xg2lVKT/view?usp=share_link).
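To make the unified text-to-text format concrete, here is a minimal sketch of how an information extraction task (NER in this example) could be packed into a single instruction-formatted instance. The field names, instruction wording, and answer format below are illustrative assumptions, not the exact IE INSTRUCTIONS schema; consult the downloaded data for the real layout.

```python
import json

def build_instance(instruction, options, sentence, answer):
    """Hypothetical packing of one IE example into text-to-text form.

    The actual IE INSTRUCTIONS schema may use different keys and
    instruction templates; this only illustrates the general idea of
    concatenating an instruction, the label options, and the input text.
    """
    source = f"{instruction} Option: {', '.join(options)} Text: {sentence}"
    return {"input": source, "output": answer}

example = build_instance(
    instruction="Please list all entity words in the text that fit the category.",
    options=["person", "organization", "location"],
    sentence="Barack Obama was born in Honolulu.",
    answer="person: Barack Obama; location: Honolulu",
)
print(json.dumps(example, indent=2))
```

Framing every task this way is what lets a single instruction-tuned model handle all 32 datasets: the task identity lives in the instruction text rather than in task-specific heads.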
## Citation
If you use InstructUIE in your work, please cite our paper:
```bibtex
@article{wang2023instructuie,
title={InstructUIE: Multi-task Instruction Tuning for Unified Information Extraction},
author={Wang, Xiao and Zhou, Weikang and Zu, Can and Xia, Han and Chen, Tianze and Zhang, Yuansen and Zheng, Rui and Ye, Junjie and Zhang, Qi and Gui, Tao and others},
journal={arXiv preprint arXiv:2304.08085},
year={2023}
}
```