Papers
arxiv:2504.03770

JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model

Published on Apr 3
· Submitted by Chouoftears on Apr 8
Authors:
,
,
,
,

Abstract

Multimodal large language models (MLLMs) excel in vision-language tasks but also pose significant risks of generating harmful content, particularly through jailbreak attacks. Jailbreak attacks refer to intentional manipulations that bypass safety mechanisms in models, leading to the generation of inappropriate or unsafe content. Detecting such attacks is critical to ensuring the responsible deployment of MLLMs. Existing jailbreak detection methods face three primary challenges: (1) Many rely on model hidden states or gradients, limiting their applicability to white-box models, where the internal workings of the model are accessible; (2) They involve high computational overhead from uncertainty-based analysis, which limits real-time detection, and (3) They require fully labeled harmful datasets, which are often scarce in real-world settings. To address these issues, we introduce a test-time adaptive framework called JAILDAM. Our method leverages a memory-based approach guided by policy-driven unsafe knowledge representations, eliminating the need for explicit exposure to harmful data. By dynamically updating unsafe knowledge during test-time, our framework improves generalization to unseen jailbreak strategies while maintaining efficiency. Experiments on multiple VLM jailbreak benchmarks demonstrate that JAILDAM delivers state-of-the-art performance in harmful content detection, improving both accuracy and speed.

Community

Paper author Paper submitter

When VLM Jailbreak detection doesn't need to rely on harmful data.
Our latest work, JailDAM, is a powerful jailbreak detection and defense framework for VLMs, inspired by OOD detection—making it completely independent of harmful training data.

💡 Why does it matter?
Existing VLM jailbreak detection methods struggle with:
1️⃣ White-box dependence – requiring internal model access
2️⃣ High computational costs – making real-time deployment impractical
3️⃣ Harmful data reliance – unable to cover all jailbreak scenarios, making it difficult to encompass all possible attacks
JailDAM overcomes these challenges with policy-driven memories and dynamic test-time adaptation!

Your need to confirm your account before you can post a new comment.

Sign up or log in to comment

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2504.03770 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2504.03770 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2504.03770 in a Space README.md to link it from this page.

Collections including this paper 1