Update README.md
README.md CHANGED
@@ -20,7 +20,7 @@ license: apache-2.0
 
 💻 Github Repo: https://github.com/THUDM/LongBench
 
-📚 Arxiv Paper: https://arxiv.org/
+📚 Arxiv Paper: https://arxiv.org/abs/2412.15204
 
 LongBench v2 is designed to assess the ability of LLMs to handle long-context problems requiring **deep understanding and reasoning** across real-world multitasks. LongBench v2 has the following features: (1) **Length**: Context length ranging from 8k to 2M words, with the majority under 128k. (2) **Difficulty**: Challenging enough that even human experts, using search tools within the document, cannot answer correctly in a short time. (3) **Coverage**: Covers various realistic scenarios. (4) **Reliability**: All in a multiple-choice question format for reliable evaluation.
 
@@ -69,12 +69,10 @@ This repository provides data download for LongBench v2. If you wish to use this
 
 # Citation
 ```
-@
-
-
-
-
-  archivePrefix={arXiv},
-  primaryClass={cs.CL}
+@article{bai2024longbench2,
+  title={LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks},
+  author={Yushi Bai and Shangqing Tu and Jiajie Zhang and Hao Peng and Xiaozhi Wang and Xin Lv and Shulin Cao and Jiazheng Xu and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
+  journal={arXiv preprint arXiv:2412.15204},
+  year={2024}
 }
 ```
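Since the hunk context above notes that this repository provides the data download for LongBench v2, a minimal loading sketch may be useful. It is a sketch under assumptions, not part of the diff: it assumes the Hugging Face `datasets` library, the `THUDM/LongBench-v2` dataset ID, and field names (`question`, `choice_A` through `choice_D`, `answer`, `context`) inferred from the benchmark's multiple-choice format.

```python
# Minimal sketch: load LongBench v2 and inspect one example.
# Assumptions (not confirmed by the diff above): the dataset is published
# as "THUDM/LongBench-v2" on the Hugging Face Hub, and each example carries
# multiple-choice fields named question / choice_A..choice_D / answer / context.
from datasets import load_dataset

dataset = load_dataset("THUDM/LongBench-v2", split="train")

example = dataset[0]
print(len(example["context"].split()), "words of context")
print("Q:", example["question"])
for key in ("choice_A", "choice_B", "choice_C", "choice_D"):
    print(key[-1] + ":", example[key])
print("gold:", example["answer"])  # one of "A", "B", "C", "D"
```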