bys0318 committed · Commit 2b48e49 · 1 parent: b0db490

Update README.md

Files changed (1): README.md +6 -8
README.md CHANGED
@@ -20,7 +20,7 @@ license: apache-2.0
 
 💻 Github Repo: https://github.com/THUDM/LongBench
 
-📚 Arxiv Paper: https://arxiv.org/pdf/2308.14508.pdf
+📚 Arxiv Paper: https://arxiv.org/abs/2412.15204
 
 LongBench v2 is designed to assess the ability of LLMs to handle long-context problems requiring **deep understanding and reasoning** across real-world multitasks. LongBench v2 has the following features: (1) **Length**: Context length ranging from 8k to 2M words, with the majority under 128k. (2) **Difficulty**: Challenging enough that even human experts, using search tools within the document, cannot answer correctly in a short time. (3) **Coverage**: Cover various realistic scenarios. (4) **Reliability**: All in a multiple-choice question format for reliable evaluation.
 
@@ -69,12 +69,10 @@ This repository provides data download for LongBench v2. If you wish to use this
 
 # Citation
 ```
-@misc{bai2023longbench,
-  title={LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding},
-  author={Yushi Bai and Xin Lv and Jiajie Zhang and Hongchang Lyu and Jiankai Tang and Zhidian Huang and Zhengxiao Du and Xiao Liu and Aohan Zeng and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
-  year={2023},
-  eprint={2308.14508},
-  archivePrefix={arXiv},
-  primaryClass={cs.CL}
+@article{bai2024longbench2,
+  title={LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks},
+  author={Yushi Bai and Shangqing Tu and Jiajie Zhang and Hao Peng and Xiaozhi Wang and Xin Lv and Shulin Cao and Jiazheng Xu and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
+  journal={arXiv preprint arXiv:2412.15204},
+  year={2024}
 }
 ```
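The README excerpt above notes that every LongBench v2 example uses a multiple-choice format for reliable evaluation. A minimal sketch of scoring predictions against records in such a format follows; the field names (`question`, `choice_A`–`choice_D`, `answer`) are assumptions based on the card's description, not confirmed by this diff:

```python
# Hypothetical LongBench v2-style records: four lettered choices plus a gold answer letter.
records = [
    {"question": "...", "choice_A": "...", "choice_B": "...",
     "choice_C": "...", "choice_D": "...", "answer": "B"},
    {"question": "...", "choice_A": "...", "choice_B": "...",
     "choice_C": "...", "choice_D": "...", "answer": "D"},
]

def accuracy(records, predictions):
    """Fraction of records whose predicted letter matches the gold answer."""
    correct = sum(p == r["answer"] for p, r in zip(predictions, records))
    return correct / len(records)

print(accuracy(records, ["B", "A"]))  # first correct, second wrong -> 0.5
```

Because every answer is a single letter, exact string match suffices; no free-form answer normalization is needed, which is the reliability benefit the card highlights.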