# UniToMBench Dataset
## Dataset Summary
UniToMBench is a unified benchmark designed to evaluate the Theory of Mind (ToM) reasoning abilities of large language models (LLMs). It integrates and extends existing ToM benchmarks by providing narrative-based multiple-choice questions (MCQs) that span a wide range of ToM tasks—including false belief reasoning, perspective taking, emotion attribution, scalar implicature, and more.
This dataset supports the research paper "UniToMBench: A Unified Benchmark for Advancing Theory of Mind in Large Language Models", accepted to NAACL 2025.
## Supported Tasks and Annotations
The dataset includes three subcomponents, each designed to evaluate different dimensions of Theory of Mind reasoning:
### 1. ToMBench Standard Benchmark

- File: `ToMBench_release_v1_0618.xlsx`
- Tasks: False belief, first-order/second-order belief, desires, and more
- Format: Narrative + MCQ
### 2. Evolving Stories Dataset

- File: `evolving_stories_250.xlsx`
- Format: Multi-paragraph stories that evolve over time
- Purpose: Test long-range mental state tracking
### 3. Multi-Interaction Dataset

- File: `multi_interaction_100.xlsx`
- Format: Multi-turn dialogue-based scenarios
- Purpose: Simulate conversational ToM and dynamic perspective shifts
Each sample contains the following fields; a loading sketch follows the list:
- A story or dialogue
- A question
- Multiple answer options
- The correct answer
- (Optional) metadata such as task type or character labels
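All three subcomponents ship as plain Excel workbooks, so no custom loader is needed. Below is a minimal loading sketch, assuming pandas with the `openpyxl` engine is available, the files sit in the working directory, and each workbook keeps its samples on the first sheet; only the file names come from this card, the rest is illustrative.

```python
# Minimal loading sketch (not an official loader).
# Assumptions: pandas + openpyxl installed, files in the working directory,
# samples stored on the first sheet of each workbook.
import pandas as pd

files = {
    "tombench": "ToMBench_release_v1_0618.xlsx",
    "evolving_stories": "evolving_stories_250.xlsx",
    "multi_interaction": "multi_interaction_100.xlsx",
}

# Read the first sheet of each workbook into a DataFrame.
splits = {name: pd.read_excel(path, engine="openpyxl") for name, path in files.items()}

# Quick sanity check: row counts and column headers per split.
for name, df in splits.items():
    print(f"{name}: {len(df)} rows, columns = {list(df.columns)}")
```

Printing the column headers first is a cheap way to confirm how closely a given release matches the layout described under Dataset Structure below.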
## Use Cases
- Evaluating general-purpose LLMs on human-like reasoning
- Comparing direct QA vs. multi-step reasoning pipelines (see the prompting sketch below)
- Fine-tuning or prompting models for ToM-sensitive applications like tutoring agents or social companions
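For the second use case above, the same row can be rendered either as a single direct multiple-choice prompt or as a two-stage reason-then-answer prompt. The helpers below are a hypothetical sketch: the column names follow the Dataset Structure table further down, `splits` comes from the loading sketch above, and the prompt wording is illustrative rather than the paper's exact evaluation protocol.

```python
# Hypothetical prompt builders for one sample (a pandas row from `splits`).
# Column names ("Story", "Question", "Option A", ...) follow the table below.
def direct_prompt(row) -> str:
    options = "\n".join(f"{letter}. {row[f'Option {letter}']}" for letter in "ABCD")
    return (
        f"{row['Story']}\n\n"
        f"Question: {row['Question']}\n{options}\n"
        "Answer with the letter of the correct option."
    )

def multi_step_prompt(row) -> str:
    # Same content, but the model is asked to track mental states first.
    return (
        direct_prompt(row)
        + "\nFirst, briefly state what each character knows or believes."
        + "\nThen give your final answer as a single letter."
    )

print(direct_prompt(splits["tombench"].iloc[0]))
```

Scoring then reduces to comparing the letter the model returns against the Correct Answer column.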
## Dataset Structure

Each `.xlsx` file consists of rows with the following columns:
| Story | Question | Option A | Option B | Option C | Option D | Correct Answer | Task Type | Notes |
|---|---|---|---|---|---|---|---|---|
- Formats are standardized across all files.
- The `Task Type` and `Notes` columns help filter or categorize samples (see the filtering sketch below).
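Because the task labels live in an ordinary column, slicing by them is a one-liner. The snippet below reuses the `splits` dictionary from the loading sketch and assumes the `Task Type` header; the label `"False Belief"` is a placeholder, not a guaranteed value.

```python
# Count samples per task type, then filter one split to a single category.
# "False Belief" is a placeholder label; check df["Task Type"].unique() first.
df = splits["tombench"]
print(df["Task Type"].value_counts())

false_belief = df[df["Task Type"] == "False Belief"]
print(f"{len(false_belief)} false-belief samples")
```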
## Citation
If you use this dataset, please cite:
```bibtex
@misc{unito2025,
  title={UniToMBench: A Unified Benchmark for Advancing Theory of Mind in Large Language Models},
  author={Shamant Sai and Prameshwar Thiyagarajan and Vaishnavi Parimi and Soumil Garg},
  year={2025},
  eprint={2506.09450},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
Available at: https://arxiv.org/abs/2506.09450
## License
MIT License