---
license: mit
language:
- en
task_categories:
- question-answering
tags:
- llm
- memory
- retrieval
- context interference
- long-context

configs:
- config_name: core
  description: Randomized updates (keys shuffled across key–value pairs). Recommended as the primary/SOTA comparison setting. At the highest stress tier, all tested models (as of May 2025) fail to reliably recover the final value.
  data_files:
  - split: test
    path: core.parquet

- config_name: sequential_additional
  description: Non-randomized – clear, strictly sequential blocks. Shows that a short context (5k–8k tokens) already produces strong context interference for most LLMs; even with this well-formatted data, many models' performance still drops rapidly.
  data_files:
  - split: test
    path: sequential_additional.parquet
---
A task that is **super easy for humans**, yet **all SOTA LLMs fail** to retrieve the correct answer from context, including GPT-5, Grok-4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, and others.
- Accepted at the ICML 2025 Long-Context Foundation Models Workshop (https://arxiv.org/abs/2506.08184).
- Update: this dataset has been integrated into Moonshot AI (Kimi)'s **internal benchmarking framework** for assessing **tracking capacity and context interference in LLMs/agents**.


Task:

Key1: Value_1
Key1: Value_2
......
Key1: Value_N


Question:
```
What is the current value (the last value) for Key1?
```

Expected:
```
The current value of Key1 is Value_N.
```


## Results:
ALL tested SOTA LLMs **cannot reliably retrieve** Value_N. The distribution of answers spans Value_1 to Value_N, and **as N increases**, the **answers skew** increasingly toward **Value_1**.


## Done
For the full analysis, see below.


## Note on dataset scale:
N ranges from 1 to 400. We place up to 46 such groups (Key1..Key46) together and then ask the model to retrieve just the last value of each key. All values are distinct, so from the model's reply we can measure how far its answer is from the correct one.


# Context Interference
- Accepted at the ICML 2025 Long-Context Foundation Models Workshop.

A simple context-interference evaluation.


## TL;DR
We identify a task that is **super easy for humans** but where all LLMs—from early 0.1B models to the most modern 600B+ ones (GPT-5, Grok-4, Gemini, DeepSeek, etc.)—consistently **fail in the same way**. This pinpoints the **core challenge of MRCR**:

- Multi-round co-reference under context interference:

Classic long-context benchmarks often test retrieving a single "needle" from a massive "haystack." MRCR raises the bar by placing many similar needles (up to 8) in the same context and requiring models to select the correct one, and it shows that all LLMs struggle with this task.

- PI-LLM paper: https://arxiv.org/abs/2506.08184
- OpenAI MRCR dataset: https://huggingface.co/datasets/openai/mrcr
- DeepMind MRCR (Gemini) paper: https://arxiv.org/pdf/2409.12640v2


## Our test takes this one step further
If MRCR is "multiple needles in a haystack", we show the **haystack isn't necessary** to expose core retrieval failures. By isolating—and precisely controlling—the number of similar, co-referenced items (we repeatedly update the value of the same keys in key–value pairs), our paradigm directly measures how interference from up to 400 needles limits retrieval accuracy even without any "haystack" as background. LLMs cannot perform a task as simple as "retrieve the last value" for each co-referenced item.

- We observe a clear log-linear decline in accuracy as the number of interfering updates grows (i.e., as co-references increase).
- The effect holds across the transformer models we tested. See our paper for details and methodology.

- Our demo site: https://sites.google.com/view/cog4llm
- Our paper (ICML 2025 Long-Context Workshop): https://arxiv.org/abs/2506.08184
- Mechanistic research is ongoing. The test is well established in cognitive science, where it has been used extensively to measure human **working memory capacity**.


## Key–value update paradigm (what the model sees)
We present a classical key–value experiment: the same key is updated multiple times, and the model is then asked to return the current (last) value for each key. This isolates co-reference interference without requiring extremely long distractor contexts.

Minimal example (1 key, N updates):
```
Key1: Value_1
Key1: Value_2
......
Key1: Value_N


Question:

What is the current value (the last value) for Key1?
```

Expected:
```
The current value of Key1 is Value_N.
```

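To make the paradigm concrete, here is a minimal sketch of how such a prompt could be assembled. It is illustrative only (the key names and value format are made up); the released parquet files already contain the exact prompts in the `prompt` column, so nothing here is needed to run the benchmark.

```python
def build_kv_prompt(n_keys=3, n_updates=10):
    """Toy generator in the spirit of the paradigm above (not the official generator)."""
    lines, answers = [], {}
    for k in range(1, n_keys + 1):
        key = f"Key{k}"
        values = [f"value_{k}_{i}" for i in range(1, n_updates + 1)]  # all values distinct
        lines += [f"{key}: {v}" for v in values]
        answers[key] = values[-1]  # ground truth: the last value written for each key
    keys_str = ", ".join(answers)
    question = f"What is the current value (the last value) for {keys_str}?"
    return "\n".join(lines) + "\n\n" + question, answers

prompt, truth = build_kv_prompt()
print(truth)  # {'Key1': 'value_1_10', 'Key2': 'value_2_10', 'Key3': 'value_3_10'}
```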

## Results:
ALL tested SOTA LLMs **cannot reliably retrieve** Value_N. The distribution of answers spans Value_1 to Value_N, and **as N increases**, the **answers skew** increasingly toward **Value_1**.


## Note on dataset scale:
N ranges from 1 to 400. We place up to 46 such groups (Key1..Key46) together and then ask the model to retrieve just the last value of each key. All values are distinct, so from the model's reply we can measure how far its answer is from the correct one.
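Because all values are distinct, a wrong answer can be scored by how many updates it lags behind the final one. A small sketch of that idea (the value naming here is hypothetical, not the dataset's actual encoding):

```python
def recency_distance(values, model_answer):
    """0 = correct (last value), 1 = one update behind, ...;
    None if the answer is not among the values presented for this key."""
    if model_answer not in values:
        return None
    return len(values) - 1 - values.index(model_answer)

updates = ["v1", "v2", "v3", "v4"]        # one key, updated four times
print(recency_distance(updates, "v4"))    # 0 -> retrieved the last value
print(recency_distance(updates, "v1"))    # 3 -> skewed toward the earliest value
```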


## Why this is challenging for LLMs:
- Multiple co-references to the same key cause strong interference.

1. As the number of updates per key (N) increases, LLMs **confuse earlier values** with the most recent one and fail to retrieve the last value. (Dataset column: exp_updates)
2. We intentionally restrict the task to retrieving only the last value, to keep search difficulty low and to show that all LLMs lose track due to **context interference**.


## On Randomization
We **randomize** the update order after generation to mimic unpredictable changes, interleaving updates across different keys (i.e., different keys' updates occur back-to-back rather than in contiguous blocks). Counterintuitively, this often helps LLMs, since the final update usually lands near the end of the context. In the sequential setting, most smaller (less than ~600B) models lose track after only a few updates, even with 5–8k-token inputs.
See the **Sequential / Original Non-Randomized Mode** section at the end of this document, where many LLMs' performance still **collapses** with only a **small amount of input (5–8k tokens)**. A sketch of the two orderings is shown below.
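The sketch below illustrates the difference between the two orderings (illustrative only; in the released files the ordering is already fixed and flagged by the `randomize_mode` column):

```python
import random

def order_updates(per_key_updates, randomize=True, seed=0):
    """per_key_updates: {key: [v1, ..., vN]}. Sequential mode keeps each key's block
    contiguous; randomized mode interleaves updates across keys while preserving each
    key's internal order (so the last value per key is still the last one emitted)."""
    blocks = [[(k, v) for v in vs] for k, vs in per_key_updates.items()]
    if not randomize:
        return [pair for block in blocks for pair in block]
    rng = random.Random(seed)
    order = [i for i, block in enumerate(blocks) for _ in block]
    rng.shuffle(order)                    # decide which key emits its next update
    cursors = [0] * len(blocks)
    out = []
    for i in order:
        out.append(blocks[i][cursors[i]])
        cursors[i] += 1
    return out

demo = {"Key1": ["a1", "a2"], "Key2": ["b1", "b2"]}
print(order_updates(demo, randomize=False))  # contiguous per-key blocks
print(order_updates(demo, randomize=True))   # interleaved, per-key order preserved
```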


## Cognitive science connection: Proactive Interference (PI)
Our test adopts the **classic proactive interference** paradigm from cognitive science, a **foundational method** for studying **human working memory**. PI shows how older, similar information disrupts the encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference—not just context length—limits memory and retrieval.

- Interestingly, humans are **also affected by these three dimensions**, but far less than LLMs. Humans consistently outperform even the latest and largest models on this task.

See: https://sites.google.com/view/cog4llm

## SAME log-linear decline of accuracy for ALL SOTA LLMs tested (2019–2025)
- Humans: near-ceiling accuracy (99%+) on this controlled task across conditions (see the paper for protocol and exact numbers).
- LLMs: accuracy declines approximately log-linearly with the number of updates per key and with the number of concurrent update blocks (details, plots, and model list in our paper; a fitting sketch follows this list).
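The log-linear claim is easy to check on your own results: average accuracy per update count, then regress accuracy on log(N). A minimal sketch with made-up numbers (purely illustrative, not results from the paper):

```python
import numpy as np

# Made-up accuracies at several update counts N, only to demonstrate the fit
n_updates = np.array([5, 10, 25, 50, 100, 200, 400])
accuracy = np.array([0.95, 0.90, 0.82, 0.75, 0.68, 0.61, 0.54])

slope, intercept = np.polyfit(np.log(n_updates), accuracy, 1)
print(f"accuracy ~ {intercept:.2f} {slope:+.2f} * ln(N)")  # a negative slope indicates log-linear decline
```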


## Full detail of 3 tests
This dataset includes two additional evaluation dimensions that expose current LLMs' limits, covering SOTA models such as GPT-5, Grok-4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, etc. (A snippet for slicing the data by experiment follows this section.)

- Experiment 2 (dataset column: exp_keys).
  LLMs' capacity to resist interference, and hence their accuracy in retrieving the last value, decreases log-linearly as the number of concurrent keys (n_keys) grows.
  This experiment fixes everything else and varies only n_keys. (Two sets of tests are provided: one fixes the number of updates at 350, the other at 125 as a lower-difficulty setting.)

- Experiment 3 (dataset column: exp_valuelength). This dimension causes a rapid decline across LLMs (GPT-5 and Grok-4 decline similarly to GPT-2).
  Retrieval accuracy also decreases log-linearly as value length grows.
  This experiment fixes everything else and varies only value_length.
  (Two sets of tests are provided: one fixes the number of updates per key at 20, the other at only 4 as a low-difficulty setting.)

(This test is hard enough that even 4 updates per key make all LLMs fail to retrieve the last value—a setting we intentionally chose to keep search difficulty low. Retrieving values at other positions yields even lower performance.)
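To evaluate one dimension at a time, you can slice the parquet by its experiment label. The sketch below assumes the labels quoted above (`exp_updates`, `exp_keys`, `exp_valuelength`) appear as values of the `experiment` column used by the Quick Start script; inspect `value_counts()` first to confirm.

```python
import pandas as pd
from huggingface_hub import hf_hub_download

df = pd.read_parquet(
    hf_hub_download(repo_id="giantfish-fly/pi-llm", filename="core.parquet", repo_type="dataset")
)
print(df["experiment"].value_counts())  # inspect which experiment labels are present

# Keep only the value-length dimension (Experiment 3), assuming that label exists
exp3 = df[df["experiment"] == "exp_valuelength"]
print(f"{len(exp3)} rows selected")
```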

## One more thing: Sequential / Non-Randomized Mode (last but interesting)
This mode lives in a separate dataset file (dataset column: extra_exp_updates_randomoff).
It uses the exact format shown in this document, without randomization. We fix everything else and vary only the number of updates, just as in the experiments above, but with randomization turned off (column: randomize_mode).
- This separate dataset consists of 46 of the following blocks, in non-randomized order:


Key1: Value_1
Key1: Value_2
......
Key1: Value_N

Key2: Value_1
Key2: Value_2
......
Key2: Value_N

....all the way to the Key46 block

Question:

What is the current value (the last value) for Key1, Key2, ..., Key46?


**Result**
- In this mode, **most modern LLMs (all <600B) still confuse the last value with an earlier value after only 50–100 updates** (fewer than 12–25k tokens, far less than any LLM's context window).
- Models quickly confuse earlier values with the most recent one.
- This is the **original and simplest test**.
- Performance for this mode is also **reported in our paper (Figure 4)**.
- **Step-like failure pattern** in these sequential key–value update tests: retrieval accuracy remains near-perfect as interfering information is added in strictly sequential order, until a model-specific threshold is reached—after which **performance drops rapidly to near zero**.

# PI-LLM Dataset File List

This repository hosts the **PI-LLM** dataset.
Currently it includes two files (a loading snippet follows):

- **core.parquet** → Main dataset (randomized updates). Recommended as the primary/SOTA comparison setting; all tested models fail to reliably retrieve the last value.
- **sequential_additional.parquet** → Sequential mode (non-randomized, strict per-key ordered update blocks). Trivial for humans yet still challenging for many LLMs; smaller (<600B) models are especially affected, with proactive-interference effects clearly exposed even in short contexts (~5–8k tokens).
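A short loading sketch for both files. (The `configs` declared in the YAML header should also make `datasets.load_dataset` work, but the plain parquet path below avoids that assumption.)

```python
import pandas as pd
from huggingface_hub import hf_hub_download

def load_file(filename):
    return pd.read_parquet(
        hf_hub_download(repo_id="giantfish-fly/pi-llm", filename=filename, repo_type="dataset")
    )

core = load_file("core.parquet")                  # randomized updates (primary setting)
seq = load_file("sequential_additional.parquet")  # strict per-key sequential blocks
print(core.shape, seq.shape)
```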


## Quick Start - Evaluate Your Model

```python
from huggingface_hub import hf_hub_download
import pandas as pd
from openai import OpenAI
import json
import re
import tiktoken

# Set accordingly
MAX_CONTEXT_WINDOW = 1000000
MODEL = ""  # set this to the model you want to evaluate

# Download the dataset
dataset = pd.read_parquet(
    hf_hub_download(repo_id="giantfish-fly/pi-llm", filename="core.parquet", repo_type="dataset")
)

client = OpenAI()
enc = tiktoken.get_encoding("o200k_base")

def extract_pieces_response_to_dict(model_output, probe_target="current"):
    """
    Extract the dictionary of key-value pairs from the model output.
    First extract using verbal language match, then using colon match.
    Merge the two dictionaries, prioritizing keys from the verbal match.
    """
    if len(model_output) == 0:
        return None

    if "error code" in model_output.lower():
        return None

    if model_output.startswith("error") or model_output.startswith("Error"):
        return None

    if (re.search(r'\berror\b', model_output, re.IGNORECASE)) and (len(model_output) < 680):
        return None

    # Remove backslashes and asterisks
    model_output = re.sub(r'\\(?!n)', '', model_output)
    model_output = re.sub(r'\*', '', model_output)

    dict_verbal_match = _extract_verbal_matches(model_output, probe_target)
    dict_colon_match = _extract_colon_matches(model_output)

    dict_merged = dict_colon_match.copy()
    dict_merged.update(dict_verbal_match)
    dict_merged.pop("key", None)

    return dict_merged

def _extract_verbal_matches(model_output, probe_target="current"):
    """Extract key-value pairs using verbal patterns like 'The current value of X is Y'"""
    patterns = [
        r"(?:the)?\s*(?:most recent|final|last|latest|current|up-to-date|asked|queried|specified)\s+(?:value|word|term)?(?:s)?(?:\s+\w+){0,1}\s+(?:with|for|of|to)?\s+(?:the )?(?:category|key)?\s*([\"'\[\<]?\w+(?:\s+\w+)?[\"'\]\>]?)\s+(?:is|was)(?:\s*:\s*)?\s+([\"'\[\<]?\w+(?:\s+\w+)?[\"'\]\>]?)(?=\n|[,.;:]|$)",
    ]

    dict_response = {}
    for pattern in patterns:
        matches = re.findall(pattern, model_output, re.IGNORECASE | re.DOTALL)
        for match in matches:
            if len(match) >= 2:
                key, value = match[0], match[1]
                key = re.sub(r'[\*\'"""''\[\]\{\}\(\)\<\>]', '', key).strip()
                value = re.sub(r'[\*\'"""''\[\]\{\}\(\)\<\>]', '', value).strip()
                if key and value:
                    dict_response[key] = value
    return dict_response

def _extract_colon_matches(model_output):
    """Extract key-value pairs using colon-separated patterns"""
    # Simple colon-based extraction
    dict_response = {}
    lines = model_output.split('\n')
    for line in lines:
        if ':' in line:
            parts = line.split(':', 1)
            if len(parts) == 2:
                key = re.sub(r'[\*\'"""''\[\]\{\}\(\)\<\>]', '', parts[0]).strip()
                value = re.sub(r'[\*\'"""''\[\]\{\}\(\)\<\>]', '', parts[1]).strip()
                if key and value:
                    dict_response[key] = value
    return dict_response

def grade_pi_response(response, answer_formatted):
    """
    Compute per-row accuracy for PI-LLM: fraction of tracked keys answered with the last value.
    - Parses the ground truth JSON string (answer_formatted) into {key: last_value}.
    - Parses model output into {key: value} using robust extractors.
    - Returns (# of keys with exact value match) / (# of keys in ground truth).
    """
    try:
        # Parse ground truth JSON
        ground_truth = json.loads(answer_formatted)

        # Extract key-value pairs from model response using parsing functions
        response_dict = extract_pieces_response_to_dict(response, probe_target="current")
        if not isinstance(ground_truth, dict) or ground_truth is None:
            return 0.0
        if not isinstance(response_dict, dict) or response_dict is None:
            return 0.0

        keys = list(ground_truth.keys())
        if len(keys) == 0:
            return 0.0
        correct = sum(1 for k in keys if response_dict.get(k) == ground_truth.get(k))
        return correct / len(keys)
    except Exception:
        return 0.0

def n_tokens(messages):
    """Count tokens in messages."""
    return sum([len(enc.encode(m["content"])) for m in messages])

# Evaluate your model
results = []
for index, row in dataset.iterrows():
    messages = json.loads(row["prompt"])
    if n_tokens(messages) > MAX_CONTEXT_WINDOW:
        continue

    completion = client.chat.completions.create(
        model=MODEL,
        messages=messages,
    )
    response = completion.choices[0].message.content
    accuracy = grade_pi_response(response, row["answer_formatted"])
    parsed = extract_pieces_response_to_dict(response, probe_target="current")

    # Store result with experiment info and raw/parsed responses (useful for axes + error analysis)
    results.append({
        'experiment': row['experiment'],
        'session_id': row['session_id'],
        'run_id': row.get('run_id', None),
        'accuracy': accuracy,
        'index': index,
        'response_text': response,
        'parsed_response': parsed,
    })

    print(f"Row {index} ({row['experiment']}, session {row['session_id']}): {accuracy}")

# Calculate accuracy by experiment
results_df = pd.DataFrame(results)

# Group by experiment and calculate mean accuracy
experiment_accuracy = results_df.groupby('experiment')['accuracy'].agg(['mean', 'count']).reset_index()
experiment_accuracy['accuracy_percent'] = experiment_accuracy['mean'] * 100

print("\n=== Accuracy by Experiment ===")
for _, row in experiment_accuracy.iterrows():
    print(f"{row['experiment']}: {row['accuracy_percent']:.1f}% ({row['count']} samples)")

# Average across runs (e.g., 10 sessions via run_id)
if 'run_id' in results_df.columns:
    # Mean accuracy per experiment per run, then average across runs
    per_run = results_df.groupby(['experiment', 'run_id'])['accuracy'].mean().reset_index()
    exp_avg = per_run.groupby('experiment')['accuracy'].mean().reset_index()
    exp_avg['accuracy_percent'] = 100 * exp_avg['accuracy']
    print("\n=== Experiment accuracy averaged across runs (run_id) ===")
    for _, r in exp_avg.iterrows():
        print(f"{r['experiment']}: {r['accuracy_percent']:.1f}% (averaged over runs)")
```


## References
- PI-LLM demo site: https://sites.google.com/view/cog4llm
- PI-LLM paper: https://arxiv.org/abs/2506.08184

```bibtex
@misc{wang2025unableforgetproactiveinterference,
  title={Unable to Forget: Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length},
  author={Chupei Wang and Jiaqiu Vince Sun},
  year={2025},
  eprint={2506.08184},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.08184},
}
```

We are an interdisciplinary group interested in probing the boundaries between human and machine intelligence.

Chupei Wang*
Bachelor's degree, Department of Physics, University of Virginia.

With a foundation in physics and philosophy—including a year at the University of Chicago Divinity School—Chupei explores where logic and mind meet their limits, probing how the edges of science and the humanities intersect. He is driven by curiosity about where cognitive architectures—biological and artificial—break down, and what these failures teach us about intelligence itself. Currently seeking a lab and research position.

Jiaqiu Vince Sun*
PhD Candidate, NYU Center for Neuroscience

A former professional architect turned neuroscientist, Jiaqiu draws on his background in spatial design, cognitive neuroscience, and philosophy of mind to investigate how memory emerges and diverges in brains and artificial systems. His primary focus lies in the higher-level functions of the brain, such as self-monitoring and control.