PRvL: Quantifying the Capabilities and Risks of Large Language Models for PII Redaction
Abstract
A comprehensive analysis of Large Language Models for PII redaction evaluates various architectures and training strategies, providing guidance for accurate, efficient, and privacy-aware redaction systems.
Redacting Personally Identifiable Information (PII) from unstructured text is critical for ensuring data privacy in regulated domains. While earlier approaches have relied on rule-based systems and domain-specific Named Entity Recognition (NER) models, these methods fail to generalize across formats and contexts. Recent advances in Large Language Models (LLMs) offer a promising alternative: LLMs have demonstrated strong performance in tasks that require contextual language understanding, including the redaction of PII in free-form text, and prior work suggests that with appropriate adaptation they can become effective contextual privacy learners. However, the consequences of architectural and training choices for PII redaction remain underexplored. In this work, we present a comprehensive analysis of LLMs as privacy-preserving PII redaction systems. We evaluate a range of LLM architectures and training strategies for their effectiveness in PII redaction. Our analysis measures redaction performance, semantic preservation, and PII leakage, and compares these outcomes against latency and computational cost. The results provide practical guidance for configuring LLM-based redactors that are accurate, efficient, and privacy-aware. To support reproducibility and real-world deployment, we release PRvL, an open-source suite of fine-tuned models and evaluation tools for general-purpose PII redaction. PRvL is built entirely on open-source LLMs and supports multiple inference settings for flexibility and compliance. It is designed to be easily customized for different domains and fully operable within secure, self-managed environments, enabling data owners to perform redaction without relying on third-party services or exposing sensitive content beyond their own infrastructure.
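To make the self-hosted deployment claim concrete, the sketch below shows how an instruction-tuned open LLM could be driven as a local redactor through the Hugging Face `transformers` text-generation pipeline. The model ID and prompt are placeholders for illustration only, not an official PRvL checkpoint or the paper's exact prompt; all inference stays on local infrastructure.

```python
# Minimal sketch: self-hosted PII redaction with an instruction-tuned open LLM.
# MODEL_ID is a placeholder, not an official PRvL checkpoint name.
from transformers import pipeline

MODEL_ID = "your-org/pii-redaction-llm"  # hypothetical; substitute a fine-tuned checkpoint

# device_map="auto" requires the `accelerate` package; adjust for your hardware.
redactor = pipeline("text-generation", model=MODEL_ID, device_map="auto")

PROMPT_TEMPLATE = (
    "Redact all personally identifiable information (PII) in the text below. "
    "Replace each PII span with a typed placeholder such as [NAME] or [EMAIL] "
    "and leave everything else unchanged.\n\nText: {text}\n\nRedacted:"
)

def redact(text: str) -> str:
    """Run the redaction prompt locally; no data leaves this machine."""
    prompt = PROMPT_TEMPLATE.format(text=text)
    out = redactor(prompt, max_new_tokens=256, do_sample=False, return_full_text=False)
    return out[0]["generated_text"].strip()

if __name__ == "__main__":
    print(redact("Contact Jane Doe at jane.doe@example.com or +1-555-0100."))
```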
Community
PRvL presents the first comprehensive, open-source benchmark and toolkit for evaluating and deploying LLM-based PII redaction, systematically comparing architectures, training paradigms, and inference strategies to optimize accuracy, efficiency, and privacy leakage control across domains and languages.
➡️ **Key Highlights of PRvL:**
🧪 **Systematic Multi-Architecture Benchmarking:** Evaluates six LLM families (Dense, Small, MoE, LRM, SSM, and NER baselines) across fine-tuning, instruction tuning, and RAG, measuring span accuracy, label fidelity, semantic preservation, and privacy leakage (SPriV) under both in-domain and cross-domain/language shifts (see the evaluation sketch after these highlights).
🧩 **Open-Source PII Redaction Stack:** Releases PRvL, a reproducible, domain-customizable suite of fine-tuned and instruction-tuned models, retrieval pipelines, and evaluation scripts, supporting secure, self-hosted deployment without third-party dependencies.
🧠 **Adaptation Paradigm Insights:** Demonstrates that instruction tuning consistently outperforms fine-tuning and RAG for PII redaction by reducing mislabels and leakage while preserving span accuracy, with smaller models like DeepSeek-Q1 matching or surpassing large-scale LLMs in efficiency-accuracy trade-offs.
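The span-level and leakage measurements named in these highlights can be illustrated with a toy computation. The sketch below uses exact-match span precision/recall and a naive surface-string leakage count; it is a simplification for illustration and does not reproduce the paper's SPriV metric.

```python
# Illustrative evaluation sketch: exact-match span precision/recall for predicted
# PII spans, plus a naive leakage count (gold PII strings surviving redaction).
# This simplifies, and does not reproduce, the paper's SPriV metric.

def span_scores(gold_spans, pred_spans):
    """Exact-match precision/recall/F1 over (start, end, label) tuples."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def leakage_count(original, redacted, gold_spans):
    """Number of gold PII surface strings still present verbatim in the output."""
    return sum(original[s:e] in redacted for s, e, _ in gold_spans)

if __name__ == "__main__":
    text = "Contact Jane Doe at jane.doe@example.com."
    gold = [(8, 16, "NAME"), (20, 40, "EMAIL")]
    pred = [(8, 16, "NAME")]  # model missed the email span
    redacted = "Contact [NAME] at jane.doe@example.com."
    print(span_scores(gold, pred))              # (1.0, 0.5, 0.666...)
    print(leakage_count(text, redacted, gold))  # 1 leaked PII string
```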
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation (2025)
- SoK: Semantic Privacy in Large Language Models (2025)
- PrivacyXray: Detecting Privacy Breaches in LLMs through Semantic Consistency and Probability Certainty (2025)
- Model Inversion Attacks on Llama 3: Extracting PII from Large Language Models (2025)
- What Should LLMs Forget? Quantifying Personal Data in LLMs for Right-to-Be-Forgotten Requests (2025)
- Enterprise Large Language Model Evaluation Benchmark (2025)
- Revisiting Pre-trained Language Models for Vulnerability Detection (2025)