arxiv:2505.24672

TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis

Published on May 30 · Submitted by lizhuang144 on Jun 2
Abstract

TRIDENT, an automated pipeline for generating comprehensive safety-alignment datasets, significantly improves the ethical behavior of LLMs, reducing both harmful content generation and susceptibility to malicious exploitation.

AI-generated summary

Large Language Models (LLMs) excel in various natural language processing tasks but remain vulnerable to generating harmful content or being exploited for malicious purposes. Although safety alignment datasets have been introduced to mitigate such risks through supervised fine-tuning (SFT), these datasets often lack comprehensive risk coverage. Most existing datasets focus primarily on lexical diversity while neglecting other critical dimensions. To address this limitation, we propose a novel analysis framework to systematically measure the risk coverage of alignment datasets across three essential dimensions: Lexical Diversity, Malicious Intent, and Jailbreak Tactics. We further introduce TRIDENT, an automated pipeline that leverages persona-based, zero-shot LLM generation to produce diverse and comprehensive instructions spanning these dimensions. Each harmful instruction is paired with an ethically aligned response, resulting in two datasets: TRIDENT-Core, comprising 26,311 examples, and TRIDENT-Edge, with 18,773 examples. Fine-tuning Llama 3.1-8B on TRIDENT-Edge demonstrates substantial improvements, achieving an average 14.29% reduction in Harm Score, and a 20% decrease in Attack Success Rate compared to the best-performing baseline model fine-tuned on the WildBreak dataset.
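The abstract's first dimension, Lexical Diversity, is a measurable property of a dataset. The paper defines its own coverage analysis, but a common proxy for lexical diversity is the distinct-n ratio (unique n-grams over total n-grams), sketched below; this is an illustrative stand-in, not necessarily the metric TRIDENT uses.

```python
def distinct_n(texts, n=2):
    """Distinct-n: unique n-grams divided by total n-grams across a corpus.
    A standard lexical-diversity proxy (hypothetical sketch; the paper's
    exact coverage metrics may differ)."""
    total, unique = 0, set()
    for t in texts:
        tokens = t.lower().split()
        # All contiguous n-grams of the token sequence
        grams = list(zip(*(tokens[i:] for i in range(n))))
        total += len(grams)
        unique.update(grams)
    return len(unique) / total if total else 0.0

# Higher values indicate instructions that repeat fewer phrasings
print(distinct_n(["how do I bypass a filter", "explain how to bypass a filter"]))
```

A dataset scoring low on such a metric may over-represent a few phrasings of the same request, which is exactly the coverage gap the paper argues existing alignment datasets suffer from.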

Community


šŸ“„ TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red Teaming Data Synthesis

This paper presents TRIDENT, a scalable pipeline for generating high-coverage alignment datasets by diversifying across three key dimensions: Lexical Diversity, Malicious Intent, and Jailbreak Tactics. It introduces two datasets, TRIDENT-CORE and TRIDENT-EDGE, with TRIDENT-EDGE achieving up to a 20% reduction in Attack Success Rate and outperforming WILDBREAK and other state-of-the-art baselines across seven safety benchmarks.

šŸ” Key highlights:

• Fully automated persona-based red teaming using zero-shot LLM generation
• 100-class fine-grained malicious intent categorization
• Multiple jailbreak transformations such as Cipher Encoding, Code Injection, and ReNeLLM
• Publicly available dataset and code: https://github.com/FishT0ucher/TRIDENT
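To make the Jailbreak Tactics dimension concrete, a cipher-encoding transformation of the kind listed above can be sketched roughly as follows. This is an illustrative ROT13 example with a benign placeholder seed; TRIDENT's actual transformations are defined in the paper and linked repository.

```python
import codecs

def cipher_encode(instruction: str) -> str:
    """Illustrative jailbreak-tactic transform: ROT13-encode an instruction
    and ask the model to decode it before answering. Hypothetical sketch,
    not TRIDENT's exact implementation."""
    encoded = codecs.encode(instruction, "rot13")
    return (
        "The following request is ROT13-encoded. Decode it, then respond:\n"
        + encoded
    )

seed = "Explain how the pipeline works"  # benign placeholder seed instruction
print(cipher_encode(seed))
```

In the red-teaming setting, each such transformed instruction is paired with an ethically aligned refusal or safe response, so that fine-tuning teaches the model to resist the tactic rather than merely the surface phrasing.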

[Figure: TRIDENT pipeline overview]

