---
license: mit
language:
- en
pretty_name: Simple Bench Public (20-12-2024)
size_categories:
- n<1K
task_categories:
- question-answering
---
Simple Bench Public
Where Everyday Human Reasoning Still Surpasses Frontier Models.
Dataset Details
Dataset Description
"[...] A multiple-choice text benchmark for LLMs where individuals with unspecialized (high school) knowledge outperform SOTA models. SimpleBench includes over 200 questions covering spatio-temporal reasoning, social intelligence, and what we call linguistic adversarial robustness (or trick questions). For the vast majority of text-based benchmarks LLMs outperform a non-specialized human, and increasingly, exceed expert human performance. However, on SimpleBench, a non-specialized human baseline is 83.7%, based on our small sample of nine participants, outperforming all 13 tested LLMs, including o1-preview, which scored 41.7%. While we expect model performance to improve over time, the results of SimpleBench confirm that the memorized knowledge, and approximate reasoning retrieval, utilized by frontier LLMs is not always enough to answer basic questions just yet." - (Philip and Hemang, 2024)
- Curated by: SimpleBench Team
- Funded by: N/A
- Shared by: James David Clarke
- Language(s) (NLP): English
- License: MIT
Dataset Sources
- Repository Snapshot (for this version of the dataset): https://github.com/simple-bench/SimpleBench/tree/fbc2e429085bdedad7d1a236d2bc9bc18c95f16e
- Paper: https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view
- Demo: https://simple-bench.com/
Uses
This tiny dataset (10 entries) comprises tasks that are easy for humans but still difficult for LLMs. The SimpleBench team updates the full dataset over time; this is only the publicly available version.
Direct Use
The intended use is quite niche: evaluating LLMs against problems that humans find easy to solve with pen and paper but that remain difficult for LLMs.
It is simple to evaluate because it contains only 10 questions, and since it is multiple choice it is easy to automate too (no judge LLM needed; just extract the guessed option, as in the sketch below).
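A minimal scoring sketch in Python; the answer-extraction regex and the assumption that a reply states its pick as a standalone letter are ours, not part of the official harness:

```python
import re

def extract_choice(response: str) -> str | None:
    """Heuristically pull the model's chosen option (A-F) from its reply.

    Assumes the reply states the pick as a standalone letter, e.g.
    "Final answer: C"; takes the last such letter found.
    """
    letters = re.findall(r"\b([A-F])\b", response.upper())
    return letters[-1] if letters else None

def score(eval_data: list[dict], responses: dict[int, str]) -> float:
    """Fraction of questions where the extracted choice matches the answer key."""
    correct = sum(
        extract_choice(responses.get(item["question_id"], "")) == item["answer"]
        for item in eval_data
    )
    return correct / len(eval_data)
```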
Out-of-Scope Use
Needless to say, we advise against training your model on this dataset (benchmark contamination is bad), and against other immoral or illegal uses.
It is not ideal for most real-world use cases due to the small sample size (10 Q&A pairs).
It is also multiple choice, with a 1-in-6 chance of guessing correctly at random, so false positives can happen; the sketch below puts numbers on this.
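A quick sketch of the guessing risk (standard binomial arithmetic, not from the SimpleBench paper): random guessing averages about 1.67 correct out of 10, and the chance of a lucky high score is easy to compute:

```python
from math import comb

def p_at_least(k: int, n: int = 10, p: float = 1 / 6) -> float:
    """Probability of guessing at least k of n questions correctly by chance."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(f"P(score >= 5/10 by guessing) = {p_at_least(5):.4f}")  # ~0.0155, i.e. ~1.5%
```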
Dataset Structure
"eval_data": [
{
"question_id": int
"prompt": str
"answer": str
}, etc...
]
- `eval_data`: an array of JSON objects, one per question.
- `question_id`: an integer starting at 1, incremented by 1 for each question.
- `prompt`: a string containing the question.
- `answer`: a single uppercase character in the range A-F.
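A minimal loading and validation sketch; the `simple_bench_public.json` filename is an assumption based on the repository snapshot, so adjust it if your copy differs:

```python
import json

# Filename assumed from the repository snapshot linked above.
with open("simple_bench_public.json", encoding="utf-8") as f:
    data = json.load(f)

# Check each entry against the schema described in this section.
for item in data["eval_data"]:
    assert isinstance(item["question_id"], int)
    assert isinstance(item["prompt"], str)
    assert len(item["answer"]) == 1 and item["answer"] in "ABCDEF"

print(f"Loaded {len(data['eval_data'])} questions.")
```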
Dataset Creation
Curation Rationale
The author published this here for easy access to the dataset; as the dataset gets updated over time, different versions may appear with the relevant datestamp.
Source Data
The data was made by hand by the SimpleBench team.
Data Collection and Processing
Collection and processing were done by hand by the SimpleBench team.
Who are the source data producers?
Handcrafted by the SimpleBench team.
Personal and Sensitive Information
The dataset contains no PII or sensitive information.
Bias, Risks, and Limitations
"As a small, self-funded team, we lacked the resources to recruit enough volunteers for statistically robust human averages, such as those in H-ARC5" - (Philip and Hemang, 2024)
Recommendations
This dataset is tiny (10 questions), so it is not representative of real-world use cases; it only highlights tasks that are simple for humans but still hard for large language models. That is its scope.
Citation
Philip and Hemang (2024) SimpleBench: The Text Benchmark in which Unspecialized Human Performance Exceeds that of Current Frontier Models, Google Docs. Available at: https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view?usp=embed_facebook (Accessed: 18 April 2025).
BibTeX:
@misc{philip2024simplebench,
  title = {SimpleBench: The Text Benchmark in which Unspecialized Human Performance Exceeds that of Current Frontier Models},
  author = {Philip and Hemang},
  year = {2024},
  howpublished = {Google Docs},
  url = {https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view?usp=embed_facebook},
  note = {Accessed: 18 April 2025}
}
APA:
Philip, & Hemang. (2024, October 31). SimpleBench: The Text Benchmark in which Unspecialized Human Performance Exceeds that of Current Frontier Models. Google Docs. https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view?usp=embed_facebook