giantfish-fly committed
Commit cca6aca · verified · 1 Parent(s): ea0a3d4

Update README.md

Files changed (1):
  1. README.md +12 -3
README.md CHANGED
@@ -55,18 +55,27 @@ results:
 LLMs cannot reliably retrieve Value_N. Distribution spans value_1 to value_N, and as N increases, the answers skew increasingly toward value_1.
 
 
-## Why this is challenging for LLMs
+## Why this is challenging for LLMs:
 - Multiple co-references to the same key cause strong interference.
 
-As N gets larger, LLMs increasingly confuse earlier values with the most recent one, and cannot retrieve the last value.
+1. As N (the number of value updates per key) grows, LLMs increasingly confuse earlier values with the most recent one and cannot retrieve the last value.
+Experiment 1 (dataset column: exp_updates).
+
+The dataset adds two more evaluation dimensions that expose current LLMs' limits, covering SOTA models including GPT-5, Grok 4, DeepSeek, Gemini 2.5 Pro, Mistral, Llama 4, and others:
+2. As the number of keys (key_n) grows, LLMs' capacity to resist interference and retrieve the last value also decreases log-linearly.
+Experiment 2 (dataset column: exp_keys).
+
+3. As the length of each value grows, LLMs' retrieval accuracy also decreases log-linearly.
+Experiment 3 (dataset column: exp_valuelength).
 
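+To make the three dimensions concrete, here is a minimal sketch of how one such probe can be generated (an illustrative assumption, not the released evaluation harness; the function name, prompt wording, and value alphabet are placeholders):
+
+```python
+import random
+import string
+
+def build_probe(n_keys: int, n_updates: int, value_length: int, seed: int = 0):
+    """Build one interleaved key-value update stream and its gold answer.
+
+    n_updates    -> N, Experiment 1 (exp_updates)
+    n_keys       -> Experiment 2 (exp_keys)
+    value_length -> Experiment 3 (exp_valuelength)
+    """
+    rng = random.Random(seed)
+    keys = [f"key_{i}" for i in range(n_keys)]
+    history = {k: [] for k in keys}
+    lines = []
+    for _ in range(n_updates):      # every key is overwritten N times,
+        for k in keys:              # interleaved so older values interfere
+            v = "".join(rng.choices(string.ascii_lowercase + string.digits,
+                                    k=value_length))
+            history[k].append(v)
+            lines.append(f"{k} = {v}")
+    target = keys[0]
+    prompt = (
+        "Below is a stream of assignments; later assignments overwrite "
+        "earlier ones.\n\n" + "\n".join(lines) +
+        f"\n\nWhat is the current value of {target}? Reply with the value only."
+    )
+    return prompt, history[target][-1]  # the correct answer is the last write
+```
+Scoring is then an exact match between the model's reply and the returned gold value.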
 ## Cognitive science connection: Proactive Interference (PI)
 Our test adopts the classic proactive interference paradigm from cognitive science, a foundational method for studying human working memory. PI shows how older, similar information disrupts encoding and retrieval of newer content. Bringing this approach to LLMs allows us to directly measure how interference—not just context length—limits memory and retrieval.
+Interestingly, humans are also affected by all three dimensions, but far less than LLMs, and they outperform the latest and largest LLMs on this task.
 
 See: https://sites.google.com/view/cog4llm
 
 ## Results at a glance
-- Humans: near-ceiling accuracy on this controlled task across conditions (see paper for protocol and exact numbers).
+- Humans: near-ceiling accuracy (99%+) on this controlled task across conditions (see paper for protocol and exact numbers).
 - LLMs: accuracy declines approximately log-linearly with the number of updates per key and with the number of concurrent update blocks (details, plots, and model list in our paper).
 
 ## Quick Start - Evaluate Your Model
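+The evaluation loop reduces to: load the rows, query your model, and exact-match the reply. Here is a minimal sketch assuming the data is loaded from the Hugging Face Hub; the repo id, split name, and the prompt/answer column names are placeholders to adapt, and my_model_answer() stands in for your own API call:
+
+```python
+from datasets import load_dataset
+
+# Placeholder repo id and split; substitute this dataset's actual Hub id.
+ds = load_dataset("your-namespace/pi-benchmark", split="test")
+
+def my_model_answer(prompt: str) -> str:
+    """Replace with a call to your model (OpenAI client, vLLM, etc.)."""
+    raise NotImplementedError
+
+correct = 0
+for row in ds:
+    pred = my_model_answer(row["prompt"])                   # assumed column
+    correct += int(pred.strip() == row["answer"].strip())   # assumed column
+print(f"accuracy = {correct / len(ds):.3f}")
+```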