TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis
Abstract
TRIDENT is an automated pipeline for generating comprehensive safety alignment datasets; fine-tuning on its data substantially improves the ethical behavior of LLMs by reducing both harmful content generation and susceptibility to malicious exploitation.
Large Language Models (LLMs) excel in various natural language processing tasks but remain vulnerable to generating harmful content or being exploited for malicious purposes. Although safety alignment datasets have been introduced to mitigate such risks through supervised fine-tuning (SFT), these datasets often lack comprehensive risk coverage. Most existing datasets focus primarily on lexical diversity while neglecting other critical dimensions. To address this limitation, we propose a novel analysis framework to systematically measure the risk coverage of alignment datasets across three essential dimensions: Lexical Diversity, Malicious Intent, and Jailbreak Tactics. We further introduce TRIDENT, an automated pipeline that leverages persona-based, zero-shot LLM generation to produce diverse and comprehensive instructions spanning these dimensions. Each harmful instruction is paired with an ethically aligned response, resulting in two datasets: TRIDENT-Core, comprising 26,311 examples, and TRIDENT-Edge, with 18,773 examples. Fine-tuning Llama 3.1-8B on TRIDENT-Edge demonstrates substantial improvements, achieving an average 14.29% reduction in Harm Score and a 20% decrease in Attack Success Rate compared to the best-performing baseline model fine-tuned on the WildBreak dataset.
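To make the generation loop concrete, here is a minimal Python sketch of persona-based, zero-shot synthesis as the abstract describes it: a persona conditions the model toward a distinct slice of malicious intent, and each generated instruction is paired with an ethically aligned refusal. The persona list, both prompt templates, and the `complete` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of persona-conditioned, zero-shot red-teaming data synthesis.
# PERSONAS, the templates, and `complete` are illustrative assumptions.
import json
from typing import Callable

PERSONAS = [
    "a disgruntled chemistry graduate student",
    "an underground forum moderator",
    "a social engineer posing as IT support",
]

RED_TEAM_TEMPLATE = (
    "You are {persona}. Write one request you might realistically make "
    "that a safety-aligned assistant should refuse."
)
ALIGNED_TEMPLATE = (
    "Write a brief, helpful refusal to the following request, explaining "
    "why it cannot be fulfilled:\n{instruction}"
)

def build_pairs(complete: Callable[[str], str]) -> list[dict]:
    """Pair each persona-generated harmful instruction with an aligned response."""
    pairs = []
    for persona in PERSONAS:
        instruction = complete(RED_TEAM_TEMPLATE.format(persona=persona))
        response = complete(ALIGNED_TEMPLATE.format(instruction=instruction))
        pairs.append({"instruction": instruction, "response": response})
    return pairs

if __name__ == "__main__":
    # Stub completion function so the sketch runs without any LLM backend.
    demo = build_pairs(lambda p: f"<model output for: {p[:40]}...>")
    print(json.dumps(demo, indent=2))
```

In the full pipeline, a loop of this shape would additionally apply jailbreak transformations and deduplicate for lexical diversity before examples reach the dataset.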
Community
TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis
This paper presents TRIDENT, a scalable pipeline for generating high-coverage alignment datasets by diversifying across three key dimensions: Lexical Diversity, Malicious Intent, and Jailbreak Tactics. It introduces two datasets, TRIDENT-CORE and TRIDENT-EDGE, with TRIDENT-EDGE achieving up to a 20% reduction in Attack Success Rate and outperforming WILDBREAK and other state-of-the-art baselines across seven safety benchmarks.
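The coverage claim rests on the three dimensions being measurable. As a toy illustration of the Lexical Diversity dimension only, the sketch below computes a distinct-n ratio (unique n-grams over total n-grams) across instructions; the metric choice and the `distinct_n` helper are assumptions for illustration, and the paper's framework may quantify diversity differently.

```python
# Toy lexical-diversity measure: distinct-n ratio over a set of instructions.
# The metric and helper are illustrative assumptions, not the paper's method.

def distinct_n(texts: list[str], n: int = 2) -> float:
    """Fraction of n-grams that are unique across all given texts."""
    ngrams = []
    for text in texts:
        tokens = text.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

instructions = [
    "explain how to secure a home network",
    "explain how to secure a corporate network",
    "draft a phishing awareness training email",
]
print(f"distinct-2 ratio: {distinct_n(instructions):.3f}")  # higher = more diverse
```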
Key highlights:
- Fully automated persona-based red-teaming using zero-shot LLM generation
- 100-class fine-grained malicious intent categorization
- Multiple jailbreak transformations, such as Cipher Encoding, Code Injection, and ReNeLLM (a minimal example follows this list)
- Publicly available dataset and code: https://github.com/FishT0ucher/TRIDENT
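As a minimal example of a cipher-encoding transformation of the kind listed above, the sketch below rewrites an instruction as a ROT13 puzzle with decoding directions; the wrapper text and the `cipher_encode` helper are illustrative assumptions, not the paper's exact prompt.

```python
# Minimal cipher-encoding jailbreak transformation: ROT13-wrap an instruction.
# The wrapper prompt is an illustrative assumption, not the paper's template.
import codecs

def cipher_encode(instruction: str) -> str:
    """Rewrite an instruction as a ROT13 decoding task."""
    encoded = codecs.encode(instruction, "rot13")
    return (
        "The following request is ROT13-encoded. Decode it and respond "
        f"to the decoded text:\n{encoded}"
    )

print(cipher_encode("Summarize this article in two sentences."))
```

Transformations like this probe whether safety behavior survives when the surface form of a request changes, which is why pairing them with aligned responses broadens risk coverage.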
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- RainbowPlus: Enhancing Adversarial Prompt Generation via Evolutionary Quality-Diversity Search (2025)
- Logic Jailbreak: Efficiently Unlocking LLM Safety Restrictions Through Formal Logical Expression (2025)
- FalseReject: A Resource for Improving Contextual Safety and Mitigating Over-Refusals in LLMs via Structured Reasoning (2025)
- Safe Delta: Consistently Preserving Safety when Fine-Tuning LLMs on Diverse Datasets (2025)
- Accidental Misalignment: Fine-Tuning Language Models Induces Unexpected Vulnerability (2025)
- OET: Optimization-based prompt injection Evaluation Toolkit (2025)
- Benchmarking Adversarial Robustness to Bias Elicitation in Large Language Models: Scalable Automated Assessment with LLM-as-a-Judge (2025)