TextQuests: How Good are LLMs at Text-Based Video Games?
Abstract
TextQuests evaluates AI agents' intrinsic reasoning and problem-solving capabilities in long, exploratory, text-based interactive fiction environments without external tools.
Evaluating AI agents within complex, interactive environments that mirror real-world challenges is critical for understanding their practical capabilities. While existing agent benchmarks effectively assess skills like tool use or performance on structured tasks, they often fail to capture an agent's ability to operate autonomously in exploratory environments that demand sustained, self-directed reasoning over a long and growing context. To spur the development of agents capable of more robust intrinsic reasoning over long horizons, we introduce TextQuests, a benchmark based on the Infocom suite of interactive fiction games. These text-based adventures, which can take human players over 30 hours and require hundreds of precise actions to solve, serve as an effective proxy for evaluating AI agents on focused, stateful tasks. By precluding the use of external tools, the benchmark isolates an LLM agent's capacity for self-contained problem-solving and intrinsic long-context reasoning in an exploratory setting that demands trial-and-error learning and sustained problem-solving within a single interactive session. We release TextQuests at https://textquests.ai.
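To make the evaluation setup concrete, below is a minimal sketch of the single-session agent loop the abstract describes: the full game transcript accumulates in the model's context, the model emits one command per turn, and no external tools or memory aids are available. Every name here (`InteractiveFictionEnv`, `query_llm`, `MAX_STEPS`) is an illustrative assumption, not the benchmark's actual interface.

```python
# Minimal sketch of a single-session agent loop of the kind TextQuests
# implies. All interfaces below are illustrative assumptions, not the
# benchmark's actual API.

MAX_STEPS = 500  # assumed per-game step budget


class InteractiveFictionEnv:
    """Hypothetical stand-in for a text-adventure engine."""

    def reset(self) -> str:
        # Return the game's opening description.
        return "West of House. You are standing in an open field..."

    def step(self, command: str) -> tuple[str, float, bool]:
        # Return (observation, score_delta, game_over) for one command.
        return "Nothing happens.", 0.0, False


def query_llm(history: list[dict[str, str]]) -> str:
    """Hypothetical LLM call mapping the full dialogue history to the
    next game command. A real harness would query a model API here."""
    return "open mailbox"


def run_episode(env: InteractiveFictionEnv) -> float:
    """Play one game in a single session with a growing context."""
    history = [{"role": "user", "content": env.reset()}]
    score = 0.0
    for _ in range(MAX_STEPS):
        command = query_llm(history)  # sees the entire transcript so far
        observation, delta, done = env.step(command)
        score += delta
        # No summarization, retrieval, or external memory: the agent
        # must reason over the full, growing history by itself.
        history.append({"role": "assistant", "content": command})
        history.append({"role": "user", "content": observation})
        if done:
            break
    return score


if __name__ == "__main__":
    print(run_episode(InteractiveFictionEnv()))
```

The design point the sketch highlights is that nothing outside `history` persists between turns, so progress over hundreds of actions depends entirely on the model's intrinsic long-context reasoning rather than on external scaffolding.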
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- StoryBench: A Dynamic Benchmark for Evaluating Long-Term Memory with Multi Turns (2025)
- OmniPlay: Benchmarking Omni-Modal Models on Omni-Modal Game Playing (2025)
- OSWorld-Human: Benchmarking the Efficiency of Computer-Use Agents (2025)
- AgentSynth: Scalable Task Generation for Generalist Computer-Use Agents (2025)
- StarDojo: Benchmarking Open-Ended Behaviors of Agentic Multimodal LLMs in Production-Living Simulations with Stardew Valley (2025)
- Can LLM-Reasoning Models Replace Classical Planning? A Benchmark Study (2025)
- EmbRACE-3K: Embodied Reasoning and Action in Complex Environments (2025)