JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model
Abstract
Multimodal large language models (MLLMs) excel in vision-language tasks but also pose significant risks of generating harmful content, particularly through jailbreak attacks. Jailbreak attacks refer to intentional manipulations that bypass safety mechanisms in models, leading to the generation of inappropriate or unsafe content. Detecting such attacks is critical to ensuring the responsible deployment of MLLMs. Existing jailbreak detection methods face three primary challenges: (1) many rely on model hidden states or gradients, limiting their applicability to white-box models, where the internal workings of the model are accessible; (2) they involve high computational overhead from uncertainty-based analysis, which limits real-time detection; and (3) they require fully labeled harmful datasets, which are often scarce in real-world settings. To address these issues, we introduce a test-time adaptive framework called JAILDAM. Our method leverages a memory-based approach guided by policy-driven unsafe knowledge representations, eliminating the need for explicit exposure to harmful data. By dynamically updating unsafe knowledge at test time, our framework improves generalization to unseen jailbreak strategies while maintaining efficiency. Experiments on multiple VLM jailbreak benchmarks demonstrate that JAILDAM delivers state-of-the-art performance in harmful content detection, improving both accuracy and speed.
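The abstract describes scoring inputs against a memory of policy-driven unsafe-knowledge representations instead of training on labeled harmful data. The snippet below is a minimal, hedged sketch of that idea, assuming a frozen multimodal encoder produces a fused image-text feature and the memory bank holds embeddings of policy-derived unsafe concepts; the similarity rule, threshold, and dimensions are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def unsafe_similarity(query_feat: np.ndarray, memory: np.ndarray) -> float:
    """Maximum cosine similarity between the query feature and any
    unsafe-concept memory entry; higher means closer to unsafe knowledge."""
    q = query_feat / (np.linalg.norm(query_feat) + 1e-8)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    return float((m @ q).max())

def is_jailbreak(query_feat: np.ndarray, memory: np.ndarray,
                 threshold: float = 0.35) -> bool:
    """Flag an input whose unsafe similarity exceeds a tuned threshold.
    The threshold value here is a placeholder, not a published number."""
    return unsafe_similarity(query_feat, memory) > threshold

# Toy usage with random features standing in for encoder outputs.
rng = np.random.default_rng(0)
memory_bank = rng.normal(size=(64, 512))   # 64 policy-driven unsafe concepts
query = rng.normal(size=512)               # fused image + text feature
print(is_jailbreak(query, memory_bank))
```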
Community
When VLM jailbreak detection doesn't need to rely on harmful data.
Our latest work, JailDAM, is a jailbreak detection and defense framework for VLMs inspired by out-of-distribution (OOD) detection, which makes it completely independent of harmful training data.
💡 Why does it matter?
Existing VLM jailbreak detection methods struggle with:
1️⃣ White-box dependence – requiring internal model access
2️⃣ High computational costs – making real-time deployment impractical
3️⃣ Harmful data reliance – labeled harmful datasets cannot cover every possible jailbreak strategy
JailDAM overcomes these challenges with policy-driven memories and dynamic test-time adaptation!
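To make "dynamic test-time adaptation" concrete, here is a hedged sketch of one plausible memory-update rule: when an incoming query is not well covered by the current unsafe-knowledge memory, its features are blended into the nearest slot so that related attacks score higher later. The EMA update and the `coverage_threshold` / `momentum` names and values are illustrative assumptions, not JailDAM's published procedure.

```python
import numpy as np

def update_memory(memory: np.ndarray, query_feat: np.ndarray,
                  coverage_threshold: float = 0.5,
                  momentum: float = 0.9) -> np.ndarray:
    """If the query is poorly covered by existing unsafe-concept slots,
    nudge the closest slot toward it with an exponential moving average."""
    q = query_feat / (np.linalg.norm(query_feat) + 1e-8)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    sims = m @ q                          # similarity to every memory slot
    best = int(np.argmax(sims))
    if sims[best] < coverage_threshold:   # unseen-looking pattern: adapt
        memory = memory.copy()
        memory[best] = momentum * memory[best] + (1.0 - momentum) * query_feat
    return memory
```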
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- HiddenDetect: Detecting Jailbreak Attacks against Large Vision-Language Models via Monitoring Hidden States (2025)
- TAIJI: Textual Anchoring for Immunizing Jailbreak Images in Vision Language Models (2025)
- STShield: Single-Token Sentinel for Real-Time Jailbreak Detection in Large Language Models (2025)
- Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense (2025)
- Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks (2025)
- Reinforced Diffuser for Red Teaming Large Vision-Language Models (2025)
- SafeInt: Shielding Large Language Models from Jailbreak Attacks via Safety-Aware Representation Intervention (2025)