Update README.md

README.md CHANGED
@@ -2,6 +2,27 @@
 license: mit
 language:
 - en
+task_categories:
+- question-answering
+tags:
+- llm
+- memory
+- retrieval
+- context interference
+- long-context
+
+configs:
+- config_name: core
+  description: Randomized (easier) – keys are shuffled across groups to reduce interference; recommended for comparing SOTA models.
+  data_files:
+  - split: test
+    path: core.parquet
+
+- config_name: ordered_hardmode
+  description: Non-randomized (harder) – strict sequential blocks; even short contexts (3k–8k tokens) already show very strong context interference. Best for stress tests and mechanistic analysis.
+  data_files:
+  - split: test
+    path: hardmode_ordered.parquet
 ---
 # PI-LLM: The Core Retrieval Challenge Behind MRCR
 (ICML 2025 Long-Context Workshop Accepted)

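The two configs above contain the same key–value updates and differ only in presentation order. A minimal sketch of the two orderings (illustrative only — the key names, round counts, and construction are assumptions, not the dataset's actual generator):

```python
import random

KEYS = [f"key{i}" for i in range(1, 5)]  # illustrative; the benchmark uses many more keys
ROUNDS = 3                               # each key is updated once per round

def make_stream(randomized: bool, seed: int = 0):
    """Build a stream of (key, value) updates.

    ordered_hardmode-style: strict blocks — round 0 for every key,
    then round 1 for every key, and so on.
    core-style: the same updates, shuffled across rounds to reduce interference.
    """
    stream = [(k, f"{k}-v{r}") for r in range(ROUNDS) for k in KEYS]
    if randomized:
        random.Random(seed).shuffle(stream)
    return stream

def ground_truth(stream):
    """The answer key: the current (last) value seen for each key in the stream."""
    last = {}
    for key, value in stream:  # later updates overwrite earlier ones
        last[key] = value
    return last
```

In the strictly ordered stream, every key's current value is its final-round update; shuffling changes which update happens to come last for each key, which is part of what makes the ordered mode the harder probe of interference.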
@@ -128,10 +150,11 @@ What is the current value (the last value) for key1 key2....key46?
 
 
 **Result**
+- In this mode, **SOTA LLMs begin to confuse the last value with earlier values after only 50–100 updates** (fewer than 12–25k tokens, far less than any LLM's context window).
 - All models quickly confuse earlier values with the most recent one.
 - This is the **original and most striking test**, but we present it separately since performance declines too quickly to allow meaningful ranking across models.
 - Performance for this mode is also **reported in our paper (Figure 4).**
+- **Step-like failure pattern** in this sequential key–value update test: retrieval accuracy remains near-perfect as interfering information is added in strictly sequential order, until a model-specific threshold is reached, after which **performance drops rapidly to near zero**.
 - **This mode is the most striking, as it highlights a fundamental limitation in how LLMs process context: a task at which humans are essentially infallible.**

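The result above is measured by asking for each key's current (the last) value and scoring exact matches against the final update. A minimal scoring sketch (hypothetical helper, not the paper's evaluation code):

```python
def score_retrieval(stream, answers):
    """Fraction of keys whose answered value equals that key's last update in the stream.

    stream:  list of (key, value) update pairs in presentation order.
    answers: dict mapping key -> the value the model retrieved.
    """
    truth = {}
    for key, value in stream:  # later updates overwrite earlier ones
        truth[key] = value
    correct = sum(1 for key, value in truth.items() if answers.get(key) == value)
    return correct / len(truth)
```

A model that returns an earlier (interfering) value for some keys scores proportionally lower; the step-like pattern appears when this score collapses once the number of updates passes a model-specific threshold.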
@@ -348,6 +371,4 @@ Jiaqiu Vince Sun*
 PhD Candidate, NYU Center for Neuroscience
 
 A former professional architect turned neuroscientist, Jiaqiu draws on his background in spatial design, cognitive neuroscience, and philosophy of mind to investigate how memory emerges and diverges in brains and artificial systems. His primary focus lies in the higher-level functions of the brain, such as self-monitoring and control.