---
task_categories:
- multiple-choice
- question-answering
- text-classification
- table-question-answering
language:
- en
tags:
- Long Context
- reasoning
size_categories:
- n<1K
license: apache-2.0
---

# LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks

🌐 Project Page: https://longbench2.github.io

💻 Github Repo: https://github.com/THUDM/LongBench

📚 Arxiv Paper: https://arxiv.org/abs/2412.15204

LongBench v2 is designed to assess the ability of LLMs to handle long-context problems requiring **deep understanding and reasoning** across real-world multitasks. LongBench v2 has the following features: (1) **Length**: context lengths ranging from 8k to 2M words, with the majority under 128k; (2) **Difficulty**: challenging enough that even human experts, using search tools within the document, cannot answer correctly in a short time; (3) **Coverage**: covers a variety of realistic scenarios; (4) **Reliability**: all questions are in multiple-choice format for reliable evaluation.

To elaborate, LongBench v2 consists of 503 challenging multiple-choice questions, with contexts ranging from 8k to 2M words, across six major task categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code repository understanding, and long structured data understanding. To ensure breadth and practicality, we collect data from nearly 100 highly educated individuals with diverse professional backgrounds. We employ both automated and manual review processes to maintain high quality and difficulty; as a result, human experts achieve only 53.7% accuracy under a 15-minute time constraint. Our evaluation reveals that the best-performing model, when answering the questions directly, achieves only 50.1% accuracy. In contrast, the o1-preview model, which incorporates longer reasoning, achieves 57.7%, surpassing the human baseline by 4%. These results highlight the importance of **enhanced reasoning ability and scaling inference-time compute to tackle the long-context challenges in LongBench v2**.

**🔍 With LongBench v2, we are eager to find out how scaling inference-time compute will affect deep understanding and reasoning in long-context scenarios. View our 🏆 leaderboard [here](https://longbench2.github.io/#leaderboard) (updating).**

# 🔨 How to use it?

#### Loading Data

You can download and load the **LongBench v2** data through the Hugging Face datasets library ([🤗 HF Repo](https://huggingface.co/datasets/THUDM/LongBench-v2)):
```python
from datasets import load_dataset
dataset = load_dataset('THUDM/LongBench-v2', split='train')
```
Alternatively, you can download the file from [this link](https://huggingface.co/datasets/THUDM/LongBench-v2/resolve/main/data.json) and load the data from it.
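
If you go the direct-download route, the file is a JSON list of records that can be read with the standard `json` module. The sketch below is a minimal, hedged example: the two inline records are illustrative stand-ins (not real dataset entries) written to a temporary file so the snippet runs standalone; with the real download you would simply point `path` at your local `data.json`.

```python
import json
import os
import tempfile

# Illustrative stand-in for the downloaded data.json (a JSON list of records).
sample_records = [
    {"_id": "q1", "domain": "Single-Document QA", "difficulty": "easy",
     "length": "short", "answer": "A"},
    {"_id": "q2", "domain": "Code Repository Understanding", "difficulty": "hard",
     "length": "long", "answer": "C"},
]
path = os.path.join(tempfile.mkdtemp(), "data.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(sample_records, f)

# Load the file and group record ids by their length category.
with open(path, encoding="utf-8") as f:
    data = json.load(f)
by_length = {}
for item in data:
    by_length.setdefault(item["length"], []).append(item["_id"])
print(by_length)  # {'short': ['q1'], 'long': ['q2']}
```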

#### Data Format

All data in **LongBench v2** are standardized to the following format:

```json
{
    "_id": "Unique identifier for each piece of data",
    "domain": "The primary domain category of the data",
    "sub_domain": "The specific sub-domain category within the domain",
    "difficulty": "The difficulty level of the task, either 'easy' or 'hard'",
    "length": "The length category of the task, which can be 'short', 'medium', or 'long'",
    "question": "The input/command for the task, usually short, such as questions in QA, queries in many-shot learning, etc.",
    "choice_A": "Option A", "choice_B": "Option B", "choice_C": "Option C", "choice_D": "Option D",
    "answer": "The ground-truth answer, denoted as A, B, C, or D",
    "context": "The long context required for the task, such as documents, books, code repositories, etc."
}
```

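As one illustration of how a record in this format might be turned into a model input, the sketch below fills a simple multiple-choice prompt template. The template wording here is our own assumption for illustration; the official prompt used for evaluation lives in the LongBench GitHub repository.

```python
# Hypothetical prompt template, assumed for illustration only.
TEMPLATE = (
    "Please read the following text and answer the question below.\n\n"
    "{context}\n\n"
    "Question: {question}\n"
    "A. {choice_A}\nB. {choice_B}\nC. {choice_C}\nD. {choice_D}\n\n"
    "Answer with a single letter (A, B, C, or D)."
)

def build_prompt(record: dict) -> str:
    """Fill the template from a LongBench v2-style record."""
    return TEMPLATE.format(
        context=record["context"],
        question=record["question"],
        choice_A=record["choice_A"],
        choice_B=record["choice_B"],
        choice_C=record["choice_C"],
        choice_D=record["choice_D"],
    )

# Minimal fabricated record, for illustration only.
record = {
    "context": "The sky appears blue because of Rayleigh scattering.",
    "question": "Why does the sky appear blue?",
    "choice_A": "Rayleigh scattering", "choice_B": "Reflection of the ocean",
    "choice_C": "Ozone absorption", "choice_D": "Mie scattering",
}
prompt = build_prompt(record)
```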
#### Evaluation

This repository provides the data download for LongBench v2. If you wish to use this dataset for automated evaluation, please refer to our [github](https://github.com/THUDM/LongBench).
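
For orientation only, scoring reduces to comparing a predicted letter against the `answer` field of each record. The sketch below computes accuracy broken down by difficulty, with fabricated records and predictions; it is not the official evaluation script from the repository above.

```python
from collections import defaultdict

# Fabricated gold records and model predictions, for illustration only.
gold = [
    {"_id": "q1", "difficulty": "easy", "answer": "A"},
    {"_id": "q2", "difficulty": "hard", "answer": "C"},
    {"_id": "q3", "difficulty": "hard", "answer": "B"},
]
predictions = {"q1": "A", "q2": "C", "q3": "D"}

def accuracy_by_difficulty(gold, predictions):
    """Fraction of correct predictions per difficulty level."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in gold:
        total[item["difficulty"]] += 1
        if predictions.get(item["_id"]) == item["answer"]:
            correct[item["difficulty"]] += 1
    return {d: correct[d] / total[d] for d in total}

scores = accuracy_by_difficulty(gold, predictions)
print(scores)  # {'easy': 1.0, 'hard': 0.5}
```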

# Dataset Statistics

<p align="left"><img width="60%" alt="data_instance" src="https://cdn-uploads.huggingface.co/production/uploads/64ed568ccf6118a9379a61b8/6i10a4KKy5WS2xGAQ8h9E.png"></p>

<p align="left"><img width="70%" alt="data_instance" src="https://cdn-uploads.huggingface.co/production/uploads/64ed568ccf6118a9379a61b8/qWMf-xKmX17terdKxu9oa.png"></p>

# Citation
```
@article{bai2024longbench2,
  title={LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks},
  author={Yushi Bai and Shangqing Tu and Jiajie Zhang and Hao Peng and Xiaozhi Wang and Xin Lv and Shulin Cao and Jiazheng Xu and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
  journal={arXiv preprint arXiv:2412.15204},
  year={2024}
}
```

gitattributes

*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
data.json filter=lfs diff=lfs merge=lfs -text