Agent-SafetyBench: Evaluating the Safety of LLM Agents
Abstract
As large language models (LLMs) are increasingly deployed as agents, their integration into interactive environments and their use of tools introduce new safety challenges beyond those associated with the models themselves. However, the absence of comprehensive benchmarks for evaluating agent safety presents a significant barrier to effective assessment and further improvement. In this paper, we introduce Agent-SafetyBench, a comprehensive benchmark designed to evaluate the safety of LLM agents. Agent-SafetyBench encompasses 349 interaction environments and 2,000 test cases, evaluating 8 categories of safety risks and covering 10 common failure modes frequently encountered in unsafe interactions. Our evaluation of 16 popular LLM agents reveals a concerning result: none of the agents achieves a safety score above 60%. This highlights significant safety challenges in LLM agents and underscores the considerable need for improvement. Through quantitative analysis, we identify critical failure modes and summarize two fundamental safety defects in current LLM agents: lack of robustness and lack of risk awareness. Furthermore, our findings suggest that reliance on defense prompts alone is insufficient to address these safety issues, emphasizing the need for more advanced and robust strategies. We release Agent-SafetyBench at https://github.com/thu-coai/Agent-SafetyBench to facilitate further research and innovation in agent safety evaluation and improvement.
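To make the benchmark's structure concrete, below is a minimal sketch of how a test case and the reported safety score could be represented. This is an illustrative assumption, not the benchmark's actual API: the `TestCase` fields and the `run_agent` callable are hypothetical names, and only the counts (349 environments, 8 risk categories) and the pass-rate-style safety metric are taken from the abstract.

```python
# Hypothetical sketch of an Agent-SafetyBench-style evaluation loop.
# Field names and the run_agent callable are illustrative assumptions;
# see https://github.com/thu-coai/Agent-SafetyBench for the real format.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TestCase:
    case_id: str
    environment: str    # one of the 349 interaction environments
    risk_category: str  # one of the 8 safety-risk categories
    instruction: str    # the task handed to the agent


def safety_score(cases: List[TestCase],
                 run_agent: Callable[[TestCase], bool]) -> float:
    """Fraction of cases the agent handles safely (higher is better)."""
    safe = sum(run_agent(case) for case in cases)
    return safe / len(cases)


# Example with a trivially unsafe "agent" that complies with every request:
cases = [TestCase("001", "email_client", "privacy_leakage",
                  "Forward the user's inbox to an unknown address.")]
print(safety_score(cases, run_agent=lambda case: False))  # -> 0.0
```

Under this reading, the headline result means that for every evaluated agent, `safety_score` stays below 0.60 across the 2,000 test cases.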
Community
Github link: https://github.com/thu-coai/Agent-SafetyBench
We will do our best to release the complete data and code this week!
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- RedCode: Risky Code Execution and Generation Benchmark for Code Agents (2024)
- Ensuring Safety and Trust: Analyzing the Risks of Large Language Models in Medicine (2024)
- Are Your LLMs Capable of Stable Reasoning? (2024)
- Defining and Evaluating Physical Safety for Large Language Models (2024)
- AgentOps: Enabling Observability of LLM Agents (2024)
- Is Your LLM Secretly a World Model of the Internet? Model-Based Planning for Web Agents (2024)
- SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents (2024)