---
license: apache-2.0
task_categories:
- text-retrieval
language:
- en
tags:
- information-retrieval
- reranking
- temporal-evaluation
- benchmark
size_categories:
- 1K<n<10K
---

# FutureQueryEval Dataset 🔍

## Dataset Description

**FutureQueryEval** is a novel Information Retrieval (IR) benchmark designed to evaluate how well rerankers handle temporally novel queries. It comprises **148 queries** with **2,938 query-document pairs** across **7 topical categories**, created specifically to test how well reranking models generalize to truly novel queries that were unseen during LLM pretraining.

### Key Features

- **Zero Contamination**: All queries refer to events after April 2025
- **Human Annotated**: Created by four expert annotators with quality control
- **Diverse Domains**: Technology, Sports, Politics, Science, Health, Business, Entertainment
- **Real Events**: Based on actual news and developments, not synthetic data
- **Temporal Novelty**: First benchmark designed to test reranker generalization on post-training events

## Dataset Statistics

| Metric | Value |
|--------|-------|
| Total Queries | 148 |
| Total Documents | 2,787 |
| Query-Document Pairs | 2,938 |
| Avg. Relevant Docs per Query | 6.54 |
| Languages | English |
| License | Apache-2.0 |

## Category Distribution

| Category | Queries | Percentage |
|----------|---------|------------|
| **Technology** | 37 | 25.0% |
| **Sports** | 31 | 20.9% |
| **Science & Environment** | 20 | 13.5% |
| **Business & Finance** | 19 | 12.8% |
| **Health & Medicine** | 16 | 10.8% |
| **World News & Politics** | 14 | 9.5% |
| **Entertainment & Culture** | 11 | 7.4% |

## Dataset Structure

The dataset consists of three main files:

### Files

- **`queries.tsv`**: Contains the query information
  - Columns: `query_id`, `query_text`, `category`
- **`corpus.tsv`**: Contains the document collection
  - Columns: `doc_id`, `title`, `text`, `url`
- **`qrels.txt`**: Contains relevance judgments (see the example line below)
  - Format: `query_id 0 doc_id relevance_score`
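
For example, a single line of `qrels.txt` follows standard TREC format; the document ID below is illustrative, not an actual entry:

```
FQ001 0 D0042 2
```

Here query `FQ001` judges document `D0042` as relevant with grade 2; the full 0-3 scale is described under Annotation Guidelines below.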

### Data Fields

#### Queries
- `query_id` (string): Unique identifier for each query
- `query_text` (string): The natural language query
- `category` (string): Topical category (Technology, Sports, etc.)

#### Corpus
- `doc_id` (string): Unique identifier for each document
- `title` (string): Document title
- `text` (string): Full document content
- `url` (string): Source URL of the document

#### Relevance Judgments (qrels)
- `query_id` (string): Query identifier
- `iteration` (int): Always 0 (standard TREC format)
- `doc_id` (string): Document identifier
- `relevance` (int): Relevance score (0-3, where 3 is highly relevant)
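
As a quick sanity check, the two TSV files can be inspected with pandas. This is a minimal sketch assuming the files include header rows; if they do not, pass `names=` with the columns listed above:

```python
import pandas as pd

# Load the query and document tables described above
queries_df = pd.read_csv("queries.tsv", sep="\t")
corpus_df = pd.read_csv("corpus.tsv", sep="\t")

# Confirm the documented columns are present
print(queries_df.columns.tolist())  # expected: ['query_id', 'query_text', 'category']
print(corpus_df.columns.tolist())   # expected: ['doc_id', 'title', 'text', 'url']
print(f"{len(queries_df)} queries, {len(corpus_df)} documents")
```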

## Example Queries

**🌍 World News & Politics:**
> "What specific actions has Egypt taken to support injured Palestinians from Gaza, as highlighted during the visit of Presidents El-Sisi and Macron to Al-Arish General Hospital?"

**⚽ Sports:**
> "Which teams qualified for the 2025 UEFA European Championship playoffs in June 2025?"

**💻 Technology:**
> "What are the key features of Apple's new Vision Pro 2 announced at WWDC 2025?"

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("abdoelsayed/FutureQueryEval")

# Access different splits
queries = dataset["queries"]
corpus = dataset["corpus"]
qrels = dataset["qrels"]

# Example: Get first query
print(f"Query: {queries[0]['query_text']}")
print(f"Category: {queries[0]['category']}")
```
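
Building on the snippet above (and assuming the same split names), the `datasets` library's `filter` method can restrict evaluation to a single topical category:

```python
# Keep only the Technology queries (categories are listed in the table above)
tech_queries = queries.filter(lambda q: q["category"] == "Technology")
print(f"Technology queries: {len(tech_queries)}")
```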

### Evaluation Example

```python
import pandas as pd

# Load relevance judgments (TREC format: query_id iteration doc_id relevance)
qrels_df = pd.read_csv("qrels.txt", sep=" ",
                       names=["query_id", "iteration", "doc_id", "relevance"])

# Count the relevant (non-zero) judgments for a specific query
query_rels = qrels_df[(qrels_df["query_id"] == "FQ001") & (qrels_df["relevance"] > 0)]
print(f"Relevant documents for query FQ001: {len(query_rels)}")
```
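
To score a reranker's ranked output against these judgments, one option is the third-party `pytrec_eval` package (`pip install pytrec_eval`). This is a sketch, not part of the official tooling; the `run` dictionary stands in for your reranker's scores and uses placeholder values:

```python
import pytrec_eval

# Build the qrels dict pytrec_eval expects: {query_id: {doc_id: relevance}}
qrels = {}
with open("qrels.txt") as f:
    for line in f:
        query_id, _, doc_id, relevance = line.split()
        qrels.setdefault(query_id, {})[doc_id] = int(relevance)

# `run` maps each query to {doc_id: score} from your reranker;
# the constant scores below are placeholders, not real model output.
run = {qid: {did: 1.0 for did in docs} for qid, docs in qrels.items()}

evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut.10"})
per_query = evaluator.evaluate(run)
mean_ndcg = sum(m["ndcg_cut_10"] for m in per_query.values()) / len(per_query)
print(f"Mean NDCG@10: {mean_ndcg:.4f}")
```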

## Methodology

### Data Collection Process

1. **Source Selection**: Major news outlets, official sites, sports organizations
2. **Temporal Filtering**: Events after April 2025 only
3. **Query Creation**: Manual generation by domain experts
4. **Novelty Validation**: Tested against GPT-4 knowledge cutoff
5. **Quality Control**: Multi-annotator review with senior oversight

### Annotation Guidelines

- **Highly Relevant (3)**: Document directly answers the query
- **Relevant (2)**: Document partially addresses the query
- **Marginally Relevant (1)**: Document mentions query topics but lacks detail
- **Not Relevant (0)**: Document does not address the query

## Research Applications

This dataset is designed for:

- **Reranker Evaluation**: Testing generalization to novel content
- **Temporal IR Research**: Understanding time-sensitive retrieval challenges
- **Domain Robustness**: Evaluating cross-domain performance
- **Contamination Studies**: Clean evaluation on post-training data

## Benchmark Results

Top-performing methods on FutureQueryEval:

| Method | Type | NDCG@10 | Runtime (s) |
|--------|------|---------|-------------|
| Zephyr-7B | Listwise | **62.65** | 1,240 |
| MonoT5-3B | Pointwise | **60.75** | 486 |
| Flan-T5-XL | Setwise | **56.57** | 892 |

## Dataset Updates

FutureQueryEval will be updated every 6 months with new queries about recent events to maintain temporal novelty:

- **Version 1.1** (December 2025): +100 queries from July-September 2025
- **Version 1.2** (June 2026): +100 queries from October 2025-March 2026

## Citation

If you use FutureQueryEval in your research, please cite:

```bibtex
@misc{abdallah2025good,
      title={How Good are LLM-based Rerankers? An Empirical Analysis of State-of-the-Art Reranking Models},
      author={Abdelrahman Abdallah and Bhawna Piryani and Jamshid Mozafari and Mohammed Ali and Adam Jatowt},
      year={2025},
      eprint={2508.16757},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Contact

- **Authors**: Abdelrahman Abdallah, Bhawna Piryani
- **Institution**: University of Innsbruck
- **Paper**: [arXiv:2508.16757](https://arxiv.org/abs/2508.16757)
- **Code**: [GitHub Repository](https://github.com/DataScienceUIBK/llm-reranking-generalization-study)

## License

This dataset is released under the Apache-2.0 License.