---
license: wtfpl
task_categories:
- token-classification
language:
- en
tags:
- cognition
- concepts
- clusters
- categories
---

**Dataset Summary**

This dataset provides digitized versions of classic human categorization benchmarks from seminal cognitive psychology studies by Rosch (1973, 1975) and McCloskey & Glucksberg (1978). These datasets capture human judgments about semantic categories and typicality, offering high-fidelity insights into how humans organize conceptual knowledge.

This dataset was released as part of the study "[From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning](https://arxiv.org/abs/2505.17117)" (Shani et al., 2025), which quantitatively compares human and large language model (LLM) conceptual representations using information-theoretic tools.

---

**Supported Tasks and Leaderboards**

* Conceptual Alignment: Evaluating how well model-derived clusters match human semantic categories.
* Typicality Modeling: Assessing the alignment between human-rated item typicality and model-internal semantic distances (see the sketch at the end of this card).
* Rate-Distortion Evaluation: Benchmarking conceptual representations with an information-theoretic framework that balances complexity and semantic fidelity.

---

**Languages**

English 🇺🇸

---

**Dataset Structure**

Each row in the dataset corresponds to an item (e.g., “robin”, “sofa”) and includes:

* item: the concept/item name.
* category: the human-assigned semantic category (e.g., "bird", "furniture").
* typicality_score: human-rated typicality of the item within its category.
* subdataset: the study that introduced this datapoint (one of: Rosch1973, Rosch1975, McCloskey1978).

***The three subdatasets are:***

* Rosch1973: 48 items in 8 categories with typicality rankings.
* Rosch1975: 552 items in 10 categories with typicality rankings.
* McCloskey1978: 449 items in 18 categories with typicality rankings.

---

**Usage**

```py
from datasets import load_dataset

# The dataset ships as a single "train" split
ds = load_dataset("CShani/human-concepts")['train']

# Filter down to a specific sub-dataset
rosch75 = ds.filter(lambda x: x['subdataset'] == 'Rosch1975')
```

---

**Citation**

If you use this dataset, please cite:

```bibtex
@article{shani2025fromtokens,
  title={From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning},
  author={Shani, Chen and Jurafsky, Dan and LeCun, Yann and Shwartz-Ziv, Ravid},
  journal={arXiv preprint arXiv:2505.17117},
  year={2025}
}
```
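
---

**Example: Typicality Modeling**

A minimal sketch of the typicality-modeling task described above. It assumes a generic sentence-embedding model (`sentence-transformers`' `all-MiniLM-L6-v2` here, not necessarily the models or metrics used in the paper) as the source of model-internal similarities, and Spearman correlation as the alignment measure. Whether lower or higher `typicality_score` values mean "more typical" depends on the original study's scale, so check the sub-dataset before interpreting the sign of the correlation.

```py
from collections import defaultdict

from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

ds = load_dataset("CShani/human-concepts")['train']

# Any sentence-embedding model can stand in for "model-internal semantic distances";
# all-MiniLM-L6-v2 is just a lightweight example choice.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Group (item, typicality_score) pairs by category.
by_category = defaultdict(list)
for row in ds:
    by_category[row["category"]].append((row["item"], row["typicality_score"]))

# For each category, correlate human typicality with item-to-category embedding similarity.
for category, items in by_category.items():
    names = [name for name, _ in items]
    human = [score for _, score in items]
    item_emb = model.encode(names)
    cat_emb = model.encode([category])
    model_sim = cos_sim(item_emb, cat_emb).squeeze(-1).tolist()
    rho, _ = spearmanr(human, model_sim)
    print(f"{category}: Spearman rho = {rho:.2f}")
```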