UserBench: An Interactive Gym Environment for User-Centric Agents
Abstract
UserBench evaluates LLM-based agents in multi-turn interactions with simulated users, revealing gaps in task completion and user alignment.
Agents based on Large Language Models (LLMs) have made impressive progress in reasoning and tool use, enabling them to solve complex tasks. However, their ability to proactively collaborate with users, especially when goals are vague, evolving, or indirectly expressed, remains underexplored. To address this gap, we introduce UserBench, a user-centric benchmark designed to evaluate agents in multi-turn, preference-driven interactions. UserBench features simulated users who start with underspecified goals and reveal preferences incrementally, requiring agents to proactively clarify intent and make grounded decisions with tools. Our evaluation of leading open- and closed-source LLMs reveals a significant disconnect between task completion and user alignment. For instance, models provide answers that fully align with all user intents only 20% of the time on average, and even the most advanced models uncover fewer than 30% of all user preferences through active interaction. These results highlight the challenge of building agents that are not just capable task executors, but true collaborative partners. UserBench offers an interactive environment to measure and advance this critical capability.
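To make the evaluation setup concrete, here is a minimal, purely illustrative sketch of a UserBench-style interaction loop: a simulated user holds hidden preferences and reveals one only when the agent asks a clarifying question, and an episode is scored by the fraction of preferences uncovered. All names (`SimulatedUser`, `run_episode`) and the reveal-on-question rule are assumptions for illustration, not the benchmark's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class SimulatedUser:
    """Hypothetical simulated user: preferences stay hidden until elicited."""
    goal: str
    hidden_preferences: list = field(default_factory=list)
    revealed: list = field(default_factory=list)

    def respond(self, agent_message: str) -> str:
        # Illustrative rule: reveal one preference per clarifying question.
        if agent_message.endswith("?") and self.hidden_preferences:
            pref = self.hidden_preferences.pop(0)
            self.revealed.append(pref)
            return f"Actually, I'd prefer {pref}."
        return "Sounds good."

def run_episode(user: SimulatedUser, max_turns: int = 5) -> float:
    """Run a toy agent that always asks; return preference coverage in [0, 1]."""
    total = len(user.hidden_preferences)
    for _ in range(max_turns):
        user.respond("Any other preferences I should know about?")
    return len(user.revealed) / total if total else 1.0

user = SimulatedUser(goal="book a flight",
                     hidden_preferences=["a window seat", "a morning departure"])
coverage = run_episode(user)  # both preferences elicited, so coverage is 1.0
```

In this toy setup the always-ask agent reaches full coverage; the paper's finding is that real models, which must decide when and what to ask, uncover far fewer preferences.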
Community
We present UserBench, a gym environment that reveals a major gap between LLMs’ task-solving and tool-use abilities and their effectiveness in understanding and aligning with real user intent.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Goal Alignment in LLM-Based User Simulators for Conversational AI (2025)
- RMTBench: Benchmarking LLMs Through Multi-Turn User-Centric Role-Playing (2025)
- Agent WARPP: Workflow Adherence via Runtime Parallel Personalization (2025)
- Teaching Language Models To Gather Information Proactively (2025)
- MMBench-GUI: Hierarchical Multi-Platform Evaluation Framework for GUI Agents (2025)
- MindFlow+: A Self-Evolving Agent for E-Commerce Customer Service (2025)
- Expectation Confirmation Preference Optimization for Multi-Turn Conversational Recommendation Agent (2025)