Humans expect rationality and cooperation from LLM opponents in strategic games
Abstract
Human subjects in a p-beauty contest choose lower numbers against LLM opponents than against humans, driven by more frequent zero Nash-equilibrium choices, highlighting differences in how people reason strategically about LLMs.
As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLM opponents in strategic settings. We present the results of the first controlled, monetarily-incentivised laboratory experiment examining differences in human behaviour in a multi-player p-beauty contest played against other humans versus LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than against humans, mainly driven by the increased prevalence of `zero' Nash-equilibrium choices. This shift is concentrated among subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to the LLMs' perceived reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects' behaviour and their beliefs about LLMs' play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems.
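For readers unfamiliar with the game: in a p-beauty contest, each player simultaneously picks a number, and whoever is closest to p times the group average wins. For p < 1, iterated best-reply reasoning drives the guess towards the unique Nash equilibrium at zero. A minimal sketch (the choice range [0, 100] and p = 2/3 below are illustrative defaults, not necessarily the paper's exact parameters):

```python
# Minimal p-beauty contest sketch. The range [0, 100] and p = 2/3
# are illustrative assumptions, not necessarily the paper's setup.

def winners(choices, p=2/3):
    """Return indices of players closest to p * mean(choices)."""
    target = p * sum(choices) / len(choices)
    best = min(abs(c - target) for c in choices)
    return [i for i, c in enumerate(choices) if abs(c - target) == best]

def iterated_best_reply(start=100.0, p=2/3, rounds=20):
    """Level-k style reasoning: repeatedly best-reply to the previous
    guess. For p < 1 this converges to the Nash equilibrium at 0."""
    guess = start
    for _ in range(rounds):
        guess *= p
    return guess

print(winners([20, 35, 50]))            # target = (2/3)*35 ≈ 23.3 -> player 0 wins
print(iterated_best_reply())            # small number approaching 0
```

This illustrates why players who believe their opponents reason many steps ahead, as the paper's subjects apparently believed of LLMs, are pushed towards choosing zero.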
Community
Turns out that humans, particularly those with high strategic reasoning ability, choose lower numbers and expect more rational and cooperative behaviour when playing a p-beauty contest against Large Language Models (LLMs) than against human opponents.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- The Influence of Human-inspired Agentic Sophistication in LLM-driven Strategic Reasoners (2025)
- How Irrationality Shapes Nash Equilibria: A Prospect-Theoretic Perspective (2025)
- FAIRGAME: a Framework for AI Agents Bias Recognition using Game Theory (2025)
- Can Generative AI agents behave like humans? Evidence from laboratory market experiments (2025)
- Compendium of Advances in Game Theory: Classical, Differential, Algorithmic, Non-Archimedean and Quantum Game (2025)