---
license: mit
language:
- en
---

# PI-LLM: The Core Retrieval Challenge Behind MRCR

Multi-round co-reference interference in long context

Classic long-context benchmarks often test retrieving a single "needle" from a massive "haystack." MRCR raises the bar by placing many similar needles (up to 8) in the same context and requiring the model to select the correct one; it shows that all LLMs struggle with this task.

- OpenAI MRCR dataset: https://huggingface.co/datasets/openai/mrcr
- DeepMind MRCR (Gemini) paper: https://arxiv.org/pdf/2409.12640v2

## Our test takes this one step further

If MRCR is "multiple needles in a haystack," we show that the haystack is not necessary to expose core retrieval failures. By isolating and precisely controlling the number of similar, co-referenced items (we repeatedly update the value of the same keys in key-value pairs), our paradigm directly measures how interference from up to 400 updates per key limits retrieval accuracy, even without any "haystack" as background. Under this interference, LLMs cannot reliably perform even a task as simple as retrieving the last value of each co-referenced item.

- We observe a clear log-linear decline in accuracy as the number of interfering updates grows (i.e., as co-references accumulate).
- The effect holds across the transformer models we tested. See our paper for details and methodology.

- Our demo site: https://sites.google.com/view/cog4llm
- Our paper (ICML Long-Context Workshop): https://arxiv.org/abs/2506.08184

## Key–value update paradigm (what the model sees)

We present a classical key–value experiment: the same key is updated multiple times, and the model is then asked to return the current (last) value for each key. This isolates co-reference interference without requiring extremely long distractor contexts.

Minimal example (1 key, N updates):

```
Key1: Value_1
Key1: Value_2
......
Key1: Value_N

Question:
What is the current value (the last value) for Key1?
```

Expected:

```
The current value of Key1 is Value_N.
```

Note on dataset scale: N ranges from 1 to 400. We put up to 46 such groups (Key1..Key46) together and then ask the model to retrieve just the last value of each key. All values are distinct, so when the model replies we know exactly how far its answer is from the correct one (sketched below).

Results: LLMs cannot reliably retrieve Value_N. Their answers span Value_1 through Value_N, and as N increases, the answers skew increasingly toward Value_1.
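
For intuition, here is a minimal, self-contained sketch of how such an update stream can be built and scored. The `build_updates` helper, key names, and value strings are illustrative placeholders, not the generator or wording used in the released dataset.

```python
# Illustrative sketch only (not the official dataset generator): builds a
# PI-style update stream for a few keys and shows how "distance from the
# correct answer" can be read off, assuming all values of a key are distinct.
def build_updates(num_keys=3, n_updates=5):
    streams = {
        f"key{i + 1}": [f"value_{i + 1}_{j + 1}" for j in range(n_updates)]
        for i in range(num_keys)
    }
    lines = []
    for j in range(n_updates):  # interleave the updates across keys
        for key, values in streams.items():
            lines.append(f"{key}: {values[j]}")
    prompt = "\n".join(lines) + "\n\nQuestion: What is the current (last) value for each key?"
    return prompt, streams

prompt, streams = build_updates()
# If the model answers "value_1_3" for key1 while the last value is "value_1_5",
# its answer lags the correct one by 2 updates:
lag = len(streams["key1"]) - 1 - streams["key1"].index("value_1_3")
print(lag)  # -> 2
```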

## Why this is challenging for LLMs

- Multiple co-references to the same key cause strong interference.
- As N gets larger, LLMs increasingly confuse earlier values with the most recent one and cannot retrieve the last value.

## Cognitive science connection: Proactive Interference (PI)

Our test adopts the classic proactive interference paradigm from cognitive science, a foundational method for studying human working memory. PI shows how older, similar information disrupts the encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference, not just context length, limits memory and retrieval.

See: https://sites.google.com/view/cog4llm

## Results at a glance

- Humans: near-ceiling accuracy on this controlled task across conditions (see the paper for protocol and exact numbers).
- LLMs: accuracy declines approximately log-linearly with the number of updates per key and with the number of concurrent update blocks (details, plots, and model list in the paper); a minimal fitting sketch follows this list.
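
To make "log-linear decline" concrete, here is a minimal NumPy sketch that fits accuracy against ln(N). The accuracy numbers are made-up placeholders for illustration only; the measured curves are in the paper.

```python
import numpy as np

# Made-up placeholder accuracies for increasing numbers of updates per key (N);
# the real measurements, plots, and model list are in the paper.
n_updates = np.array([2, 5, 10, 25, 50, 100, 200, 400])
accuracy = np.array([0.98, 0.93, 0.86, 0.78, 0.71, 0.63, 0.55, 0.47])

# "Log-linear decline" means accuracy is roughly a + b * ln(N) with a negative slope b.
b, a = np.polyfit(np.log(n_updates), accuracy, deg=1)
print(f"accuracy ≈ {a:.2f} + {b:.2f} * ln(N)")
```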

## Quick Start - Evaluate Your Model

```python
from huggingface_hub import hf_hub_download
import pandas as pd
from openai import OpenAI
import json
import re
import tiktoken

# Set accordingly
MAX_CONTEXT_WINDOW = 1000000
MODEL = ""  # set this to the model you want to evaluate

# Download the dataset
dataset = pd.read_parquet(
    hf_hub_download(repo_id="giantfish-fly/pi-llm", filename="core.parquet", repo_type="dataset")
)

client = OpenAI()
enc = tiktoken.get_encoding("o200k_base")


def extract_pieces_response_to_dict(model_output, probe_target="current"):
    """
    Extract the dictionary of key-value pairs from the model output.
    First extract using verbal language match, then using colon match.
    Merge the two dictionaries, prioritizing keys from the verbal match.
    """
    if len(model_output) == 0:
        return None

    if "error code" in model_output.lower():
        return None

    if model_output.startswith("error") or model_output.startswith("Error"):
        return None

    if re.search(r'\berror\b', model_output, re.IGNORECASE) and (len(model_output) < 680):
        return None

    # Remove backslashes and asterisks
    model_output = re.sub(r'\\(?!n)', '', model_output)
    model_output = re.sub(r'\*', '', model_output)

    dict_verbal_match = _extract_verbal_matches(model_output, probe_target)
    dict_colon_match = _extract_colon_matches(model_output)

    dict_merged = dict_colon_match.copy()
    dict_merged.update(dict_verbal_match)
    dict_merged.pop("key", None)

    return dict_merged


def _extract_verbal_matches(model_output, probe_target="current"):
    """Extract key-value pairs using verbal patterns like 'The current value of X is Y'."""
    patterns = [
        r"(?:the)?\s*(?:most recent|final|last|latest|current|up-to-date|asked|queried|specified)\s+(?:value|word|term)?(?:s)?(?:\s+\w+){0,1}\s+(?:with|for|of|to)?\s+(?:the )?(?:category|key)?\s*([\"'\[\<]?\w+(?:\s+\w+)?[\"'\]\>]?)\s+(?:is|was)(?:\s*:\s*)?\s+([\"'\[\<]?\w+(?:\s+\w+)?[\"'\]\>]?)(?=\n|[,.;:]|$)",
    ]

    dict_response = {}
    for pattern in patterns:
        matches = re.findall(pattern, model_output, re.IGNORECASE | re.DOTALL)
        for match in matches:
            if len(match) >= 2:
                key, value = match[0], match[1]
                key = re.sub(r'[\*\'"“”‘’\[\]\{\}\(\)\<\>]', '', key).strip()
                value = re.sub(r'[\*\'"“”‘’\[\]\{\}\(\)\<\>]', '', value).strip()
                if key and value:
                    dict_response[key] = value
    return dict_response


def _extract_colon_matches(model_output):
    """Extract key-value pairs using colon-separated patterns."""
    # Simple colon-based extraction
    dict_response = {}
    lines = model_output.split('\n')
    for line in lines:
        if ':' in line:
            parts = line.split(':', 1)
            if len(parts) == 2:
                key = re.sub(r'[\*\'"“”‘’\[\]\{\}\(\)\<\>]', '', parts[0]).strip()
                value = re.sub(r'[\*\'"“”‘’\[\]\{\}\(\)\<\>]', '', parts[1]).strip()
                if key and value:
                    dict_response[key] = value
    return dict_response


def grade_pi_response(response, answer_formatted):
    """
    Compute per-row accuracy for PI-LLM: fraction of tracked keys answered with the last value.
    - Parses the ground-truth JSON string (answer_formatted) into {key: last_value}.
    - Parses the model output into {key: value} using the extractors above.
    - Returns (# of keys with exact value match) / (# of keys in ground truth).
    """
    try:
        # Parse ground truth JSON
        ground_truth = json.loads(answer_formatted)

        # Extract key-value pairs from the model response
        response_dict = extract_pieces_response_to_dict(response, probe_target="current")
        if not isinstance(ground_truth, dict):
            return 0.0
        if not isinstance(response_dict, dict):
            return 0.0

        keys = list(ground_truth.keys())
        if len(keys) == 0:
            return 0.0
        correct = sum(1 for k in keys if response_dict.get(k) == ground_truth.get(k))
        return correct / len(keys)
    except Exception:
        return 0.0


def n_tokens(messages):
    """Count tokens in a list of chat messages."""
    return sum(len(enc.encode(m["content"])) for m in messages)


# Evaluate your model
results = []
for index, row in dataset.iterrows():
    messages = json.loads(row["prompt"])
    if n_tokens(messages) > MAX_CONTEXT_WINDOW:
        continue

    completion = client.chat.completions.create(
        model=MODEL,
        messages=messages,
    )
    response = completion.choices[0].message.content
    accuracy = grade_pi_response(response, row["answer_formatted"])
    parsed = extract_pieces_response_to_dict(response, probe_target="current")

    # Store result with experiment info and raw/parsed responses (useful for plotting and error analysis)
    results.append({
        'experiment': row['experiment'],
        'session_id': row['session_id'],
        'run_id': row.get('run_id', None),
        'accuracy': accuracy,
        'index': index,
        'response_text': response,
        'parsed_response': parsed,
    })

    print(f"Row {index} ({row['experiment']}, session {row['session_id']}): {accuracy}")

# Calculate accuracy by experiment
results_df = pd.DataFrame(results)

# Group by experiment and calculate mean accuracy
experiment_accuracy = results_df.groupby('experiment')['accuracy'].agg(['mean', 'count']).reset_index()
experiment_accuracy['accuracy_percent'] = experiment_accuracy['mean'] * 100

print("\n=== Accuracy by Experiment ===")
for _, exp_row in experiment_accuracy.iterrows():
    print(f"{exp_row['experiment']}: {exp_row['accuracy_percent']:.1f}% ({exp_row['count']} samples)")

# Average across runs (e.g., 10 sessions via run_id)
if 'run_id' in results_df.columns:
    # Mean accuracy per experiment per run, then average across runs
    per_run = results_df.groupby(['experiment', 'run_id'])['accuracy'].mean().reset_index()
    exp_avg = per_run.groupby('experiment')['accuracy'].mean().reset_index()
    exp_avg['accuracy_percent'] = 100 * exp_avg['accuracy']
    print("\n=== Experiment accuracy averaged across runs (run_id) ===")
    for _, r in exp_avg.iterrows():
        print(f"{r['experiment']}: {r['accuracy_percent']:.1f}% (averaged over runs)")
```
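
Before spending API calls on the full loop, it can help to inspect a single trial. This optional snippet only relies on what the script above already uses: each row's `prompt` field is a JSON-encoded chat message list, and `answer_formatted` holds the ground-truth mapping.

```python
# Optional: peek at one trial before running the full evaluation loop.
example = dataset.iloc[0]
messages = json.loads(example["prompt"])            # list of chat messages
print(f"{len(messages)} message(s), ~{n_tokens(messages)} tokens")
print(messages[-1]["content"][:500])                # start of the last message (truncated)
print("Ground truth:", example["answer_formatted"])
```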

## References

- PI-LLM demo site: https://sites.google.com/view/cog4llm
- PI-LLM paper: https://arxiv.org/abs/2506.08184