Democratizing Diplomacy: A Harness for Evaluating Any Large Language Model on Full-Press Diplomacy
Abstract
An evaluation harness allows large language models to play Diplomacy without fine-tuning, providing insights into their strategic reasoning capabilities.
We present the first evaluation harness that enables any out-of-the-box, locally run large language model (LLM) to play full-press Diplomacy without fine-tuning or specialized training. Previous work required frontier LLMs or fine-tuning due to the high complexity and information density of Diplomacy's game state. Combined with the high variance of matches, these factors made Diplomacy prohibitive for study. In this work, we use data-driven iteration to optimize a textual game state representation such that a 24B model can reliably complete matches without any fine-tuning. We develop tooling to facilitate hypothesis testing and statistical analysis, and we present case studies on persuasion, aggressive playstyles, and performance across a range of models. We conduct a variety of experiments across many popular LLMs, finding that larger models perform best while smaller models still play adequately. We also introduce Critical State Analysis, an experimental protocol for rapidly iterating on and analyzing key moments in a game in depth. Our harness democratizes the evaluation of strategic reasoning in LLMs by eliminating the need for fine-tuning, and it provides insights into how these capabilities emerge naturally from widely used LLMs. Our code is available in the supplement and will be open sourced.
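The abstract does not include the optimized representation itself, but a minimal sketch of the core idea, serializing the board into a compact text block an LLM can consume, might look like the Python below. The `Power` dataclass and `render_state` function are illustrative assumptions, not the authors' format, whose exact fields and ordering were tuned through data-driven iteration.

```python
# A minimal sketch (not the paper's actual representation) of rendering a
# Diplomacy game state as text for an LLM prompt. All structure and field
# names here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Power:
    name: str                  # e.g. "FRANCE"
    supply_centers: list[str]  # owned supply centers, e.g. ["PAR", "MAR"]
    units: list[str]           # units as "A PAR" (army) or "F BRE" (fleet)


def render_state(year: int, season: str, powers: list[Power]) -> str:
    """Serialize the board into a compact, LLM-readable text block."""
    lines = [f"== {season} {year} =="]
    for p in sorted(powers, key=lambda p: p.name):
        lines.append(f"{p.name}: {len(p.supply_centers)} centers "
                     f"({', '.join(p.supply_centers)})")
        lines.append(f"  units: {', '.join(p.units) or 'none'}")
    return "\n".join(lines)


if __name__ == "__main__":
    france = Power("FRANCE", ["PAR", "MAR", "BRE"], ["A PAR", "A MAR", "F BRE"])
    germany = Power("GERMANY", ["BER", "MUN", "KIE"], ["A BER", "A MUN", "F KIE"])
    print(render_state(1901, "Spring", [france, germany]))
```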
Community
An evaluation harness that, for the first time, allows small language models to play Diplomacy without fine-tuning, along with the odd behaviors and jailbreaks the authors found while testing.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Who is a Better Player: LLM against LLM (2025)
- SKATE, a Scalable Tournament Eval: Weaker LLMs differentiate between stronger ones using verifiable challenges (2025)
- Can LLMs Play Ô Ăn Quan Game? A Study of Multi-Step Planning and Decision Making (2025)
- A Multi-Agent Pokemon Tournament for Evaluating Strategic Reasoning of Large Language Models (2025)
- GVGAI-LLM: Evaluating Large Language Model Agents with Infinite Games (2025)
- Do LLMs Know When to Flip a Coin? Strategic Randomization through Reasoning and Experience (2025)
- Pretraining on the Test Set Is No Longer All You Need: A Debate-Driven Approach to QA Benchmarks (2025)