The Jailbreak Tax (Jailbreak Utility)
Collection
Models and dataset used in paper "The Jailbreak Tax: How Useful Are Your Jailbreak Outputs"
•
13 items
•
Updated
•
1
meta-llama/Llama-3.1-70B-Instruct model that refuses to answer questions on biology.
This model contains LoRA weights for a model fine-tuned to refuse answering biology questions. This model is used in the Jailbreak Tax paper. The purpose of the model was to provide alignment for not answering bio questions (such as the bio subset of the WMDP dataset).
The intended use is as part of The Jailbreak Tax benchmark, which measures the drop in the utility of a jailbroken model with respect to the base model (before alignment).
The model refuses 85% of WMDP-bio questions when prompted in the following message format:
The correct answer is: <letter>.
Then you can provide the explanation.
{question}
{choices}" ```
## Citation
**BibTeX:**
```bibtex
@inproceedings{nikolic2025the,
title={The Jailbreak Tax: How Useful are Your Jailbreak Outputs?},
author={Kristina Nikolić and Luze Sun and Jie Zhang and Florian Tramèr},
booktitle={ICLR 2025 Workshop on Building Trust in Language Models and Applications},
year={2025},
url={https://openreview.net/forum?id=VSSQud4diJ}
}
Base model
meta-llama/Llama-3.1-70B