Datasets:
Update README.md
README.md CHANGED
@@ -195,6 +195,15 @@ A former professional architect turned neuroscientist, Jiaqiu draws on his backg
 
 
 
+# PI-LLM Dataset File List
+
+This repository hosts the **PI-LLM** dataset.
+Currently it includes two files:
+
+- **core.parquet** → Main dataset (randomized updates). Recommended as the primary/SOTA comparison setting; all tested models fail to reliably retrieve the last value.
+- **sequential_additional.parquet** → Sequential mode (non-randomized, strict per-key ordered update blocks). Trivial for humans yet still challenging for many LLMs; smaller models (under 600B parameters) are especially affected, with proactive-interference effects clearly exposed even in short contexts (~5–8k tokens).
+
+
 ## Quick Start - Evaluate Your Model
 
 ```python

@@ -491,6 +500,7 @@ def compute_total_pi_auc(all_tests, log_base=1.5):
 ```
 
 
+```
 ## References
 -
 - PI-LLM demo site: https://sites.google.com/view/cog4llm

@@ -505,4 +515,4 @@ def compute_total_pi_auc(all_tests, log_base=1.5):
   primaryClass={cs.CL},
   url={https://arxiv.org/abs/2506.08184},
 }
-```
+```
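
The two parquet files described in the added section can be read with standard tooling. Below is a minimal loading sketch, assuming local copies of the files and an installed parquet engine such as pyarrow; the column schema is not spelled out in the diff, so the sketch inspects it rather than assuming column names.

```python
# Minimal loading sketch for the two PI-LLM splits.
# Assumes core.parquet and sequential_additional.parquet have been downloaded
# to the working directory and that a parquet engine (e.g. pyarrow) is installed.
import pandas as pd

splits = {
    "core": "core.parquet",                                     # randomized updates
    "sequential_additional": "sequential_additional.parquet",   # ordered per-key update blocks
}

for name, path in splits.items():
    df = pd.read_parquet(path)
    print(f"{name}: {len(df)} rows")
    print(df.dtypes)    # inspect the schema rather than hard-coding column names
    print(df.head(2))
```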
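The second hunk's context line shows that the quick-start code (elided from this diff) is organized around `compute_total_pi_auc(all_tests, log_base=1.5)`. That implementation is not reproduced here, so the sketch below is only a generic, hypothetical illustration of the idea a `log_base` parameter suggests: area under an accuracy curve plotted against a log-scaled number of per-key updates. All names and values are illustrative, not the repository's.

```python
# Hypothetical illustration of a log-scaled AUC: accuracy as a function of the
# number of key updates, integrated over log_base(n) with the trapezoidal rule.
# This is NOT the repository's compute_total_pi_auc; it only sketches the idea.
import math

def log_scaled_auc(update_counts, accuracies, log_base=1.5):
    """Trapezoidal area under accuracy vs. log_base(update_count)."""
    xs = [math.log(n, log_base) for n in update_counts]
    area = 0.0
    for (x0, y0), (x1, y1) in zip(zip(xs, accuracies), zip(xs[1:], accuracies[1:])):
        area += 0.5 * (y0 + y1) * (x1 - x0)
    return area

# Example: accuracy degrading as the number of per-key updates grows.
print(log_scaled_auc([1, 2, 4, 8, 16], [1.0, 0.95, 0.80, 0.55, 0.30]))
```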