
Simple Bench Public

Where Everyday Human Reasoning Still Surpasses Frontier Models.

Dataset Details

Dataset Description

"[...] A multiple-choice text benchmark for LLMs where individuals with unspecialized (high school) knowledge outperform SOTA models. SimpleBench includes over 200 questions covering spatio-temporal reasoning, social intelligence, and what we call linguistic adversarial robustness (or trick questions). For the vast majority of text-based benchmarks LLMs outperform a non-specialized human, and increasingly, exceed expert human performance. However, on SimpleBench, a non-specialized human baseline is 83.7%, based on our small sample of nine participants, outperforming all 13 tested LLMs, including o1-preview, which scored 41.7%. While we expect model performance to improve over time, the results of SimpleBench confirm that the memorized knowledge, and approximate reasoning retrieval, utilized by frontier LLMs is not always enough to answer basic questions just yet." - (Philip and Hemang, 2024)

  • Curated by: SimpleBench Team
  • Funded by: N/A
  • Shared by: James David Clarke
  • Language(s) (NLP): English
  • License: MIT

Dataset Sources

  • Paper: https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view?usp=embed_facebook

Uses

This tiny dataset (10 entries) comprises tasks that are easy for humans but still difficult for LLMs. The full benchmark is updated by the SimpleBench team over time; this is only the publicly available version of the dataset.

Direct Use

  • The intended use is quite niche: evaluating LLMs against problems that humans find easy to solve with pen and paper, but that remain difficult for LLMs.

  • It is simple to evaluate, being only 10 questions, and because it is multiple choice it is easy to automate too: no judge LLM is needed, just extract the guessed option, as in the sketch below.
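
As a concrete illustration, a minimal scoring harness could look like the following sketch. The local file name `simple_bench_public.json` and the `ask_model` callable are assumptions for illustration only; substitute whatever model client you actually use.

```python
import json
import re

def extract_choice(completion: str) -> str | None:
    """Pull the first standalone option letter (A-F) out of a completion.

    Naive on purpose: it assumes the model answers with a bare letter,
    so prompt the model to reply in that format.
    """
    match = re.search(r"\b([A-F])\b", completion)
    return match.group(1) if match else None

def score(eval_data: list[dict], ask_model) -> float:
    """Fraction of questions answered correctly; ask_model is any prompt -> str callable."""
    correct = sum(
        extract_choice(ask_model(item["prompt"])) == item["answer"]
        for item in eval_data
    )
    return correct / len(eval_data)

if __name__ == "__main__":
    with open("simple_bench_public.json") as f:  # assumed local copy of the dataset
        eval_data = json.load(f)["eval_data"]
    # Stand-in "model" that always answers A, just to show the harness runs end to end.
    print(score(eval_data, lambda prompt: "A"))
```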

Out-of-Scope Use

  • Needless to say, we advise against training your model on this dataset (benchmark contamination invalidates results) and against other immoral or illegal uses.

  • It is not ideal for most real-world use cases due to the small sample size (10 Q&A pairs).

  • It is also multiple choice, with a 1-in-6 chance of guessing correctly by chance, so false positives can happen (see the sanity check below).
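
To put that risk in numbers: under pure random guessing the expected score is 1/6 ≈ 16.7%, and over only 10 questions a lucky run is not that rare. A quick binomial sanity check (a sketch, not part of the benchmark tooling):

```python
from math import comb

def p_exactly(k: int, n: int = 10, p: float = 1 / 6) -> float:
    """Probability of exactly k correct out of n under random guessing."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Chance of scoring at least 4/10 (40%) purely by guessing: about 7%.
print(sum(p_exactly(k) for k in range(4, 11)))
```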

Dataset Structure

 "eval_data": [
    {
      "question_id": int
      "prompt": str
      "answer": str
    }, etc...
]
  1. eval_data: an array of JSON objects, one per question.
  2. question_id: an integer starting at 1, incremented by 1 for each question.
  3. prompt: a string containing the question.
  4. answer: a single uppercase character in the range A-F.
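
A small sketch of how that structure can be checked after loading, assuming a raw local copy named `simple_bench_public.json` (the file name is an assumption):

```python
import json

# Validate the schema described above against a local copy of the file.
with open("simple_bench_public.json") as f:
    eval_data = json.load(f)["eval_data"]

for i, item in enumerate(eval_data, start=1):
    assert item["question_id"] == i                           # integers starting at 1
    assert isinstance(item["prompt"], str)                    # the question text
    assert item["answer"] in {"A", "B", "C", "D", "E", "F"}   # one uppercase letter
```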

Dataset Creation

Curation Rationale

The author published the dataset here for easy access. Since the benchmark is updated over time, different versions may appear here with the relevant datestamp.

Source Data

Made by hand by the SimpleBench Team

Data Collection and Processing

Data collection and processing were done by hand by the SimpleBench team.

Who are the source data producers?

Handcrafted by the SimpleBench team.

Personal and Sensitive Information

The dataset contains no PII or sensitive information.

Bias, Risks, and Limitations

"As a small, self-funded team, we lacked the resources to recruit enough volunteers for statistically robust human averages, such as those in H-ARC5" - (Philip and Hemang, 2024)

Recommendations

This dataset is tiny (10 questions), so it is not representative of real-world use cases. Its scope is limited to highlighting tasks that are simple for humans yet still hard for large language models.

Citation

Philip and Hemang (2024) SimpleBench: The Text Benchmark in which Unspecialized Human Performance Exceeds that of Current Frontier Models, Google Docs. Available at: https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view?usp=embed_facebook (Accessed: 18 April 2025).

BibTeX:

@misc{philip2024simplebench,
    title = {SimpleBench: The Text Benchmark in which Unspecialized Human Performance Exceeds that of Current Frontier Models},
    author = {Philip and Hemang},
    year = {2024},
    howpublished = {Google Docs},
    url = {https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view?usp=embed_facebook},
    note = {Accessed: 18 April 2025}
}

APA:

Philip, & Hemang. (2024, October 31). SimpleBench: The Text Benchmark in which Unspecialized Human Performance Exceeds that of Current Frontier Models. Google Docs. https://drive.google.com/file/d/1mddNFK5UbBFVr3oDftd2Kyc6D8TFctfe/view?usp=embed_facebook