arXiv:2508.05545

PRvL: Quantifying the Capabilities and Risks of Large Language Models for PII Redaction

Published on Aug 7 · Submitted by amanchadha on Aug 8
Abstract

AI-generated summary: A comprehensive analysis of Large Language Models for PII redaction evaluates various architectures and training strategies, providing guidance for accurate, efficient, and privacy-aware redaction systems.

Redacting Personally Identifiable Information (PII) from unstructured text is critical for ensuring data privacy in regulated domains. Earlier approaches have relied on rule-based systems and domain-specific Named Entity Recognition (NER) models, but these methods fail to generalize across formats and contexts. Large Language Models (LLMs) offer a promising alternative: they have demonstrated strong performance on tasks that require contextual language understanding, including the redaction of PII in free-form text, and prior work suggests that with appropriate adaptation they can become effective contextual privacy learners. However, the consequences of architectural and training choices for PII redaction remain underexplored. In this work, we present a comprehensive analysis of LLMs as privacy-preserving PII redaction systems. We evaluate a range of LLM architectures and training strategies for their effectiveness in PII redaction, measuring redaction performance, semantic preservation, and PII leakage, and comparing these outcomes against latency and computational cost. The results provide practical guidance for configuring LLM-based redactors that are accurate, efficient, and privacy-aware. To support reproducibility and real-world deployment, we release PRvL, an open-source suite of fine-tuned models and evaluation tools for general-purpose PII redaction. PRvL is built entirely on open-source LLMs and supports multiple inference settings for flexibility and compliance. It is designed to be easily customized for different domains and fully operable within secure, self-managed environments, enabling data owners to perform redactions without relying on third-party services or exposing sensitive content beyond their own infrastructure.
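To make the self-hosted deployment story concrete, here is a minimal sketch of instruction-style PII redaction with an open-source LLM run entirely on local infrastructure via the Hugging Face transformers library. The model id, prompt wording, and placeholder scheme below are illustrative assumptions, not the released PRvL checkpoints or prompts.

```python
# Minimal sketch of self-hosted, instruction-style PII redaction with an
# open-source LLM. NOTE: the model id, prompt, and placeholder scheme are
# illustrative assumptions, not the actual PRvL pipeline.
from transformers import pipeline

# Any locally hosted open-source instruction-tuned model can stand in here.
redactor = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

PROMPT = (
    "Redact all personally identifiable information (PII) in the text below. "
    "Replace each PII span with a typed placeholder such as [NAME] or [EMAIL] "
    "and leave all other text unchanged.\n\nText: {text}\n\nRedacted:"
)

def redact(text: str) -> str:
    prompt = PROMPT.format(text=text)
    out = redactor(prompt, max_new_tokens=256, do_sample=False)
    # The pipeline output includes the prompt; return only the continuation.
    return out[0]["generated_text"][len(prompt):].strip()

# Sensitive text never leaves the machine the model runs on.
print(redact("Contact Jane Doe at jane.doe@example.com or +1-555-0100."))
```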

Community

Paper author and submitter:

PRvL presents the first comprehensive, open-source benchmark and toolkit for evaluating and deploying LLM-based PII redaction, systematically comparing architectures, training paradigms, and inference strategies to optimize accuracy, efficiency, and privacy leakage control across domains and languages.

➡️ Key Highlights of PRvL:
🧪 Systematic Multi-Architecture Benchmarking: Evaluates six LLM families (Dense, Small, MoE, LRM, SSM, and NER baselines) across fine-tuning, instruction tuning, and RAG, measuring span accuracy, label fidelity, semantic preservation, and privacy leakage (SPriV) under both in-domain and cross-domain/language shifts (a simplified leakage check is sketched after this list).
🧩 Open-Source PII Redaction Stack: Releases PRvL, a reproducible, domain-customizable suite of fine-tuned and instruction-tuned models, retrieval pipelines, and evaluation scripts, supporting secure, self-hosted deployment without third-party dependencies.
🧠 Adaptation Paradigm Insights: Demonstrates that instruction tuning consistently outperforms fine-tuning and RAG for PII redaction by reducing mislabels and leakage while preserving span accuracy, with smaller models like DeepSeek-Q1 matching or surpassing large-scale LLMs in efficiency–accuracy trade-offs.
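The paper defines its SPriV leakage metric precisely; as a rough reference point only, a minimal leakage-style check, assuming a simplified definition (the fraction of ground-truth PII surface forms that survive redaction verbatim), might look like this:

```python
# Simplified leakage check, in the spirit of the paper's privacy-leakage
# evaluation. ASSUMPTION: this is NOT the paper's SPriV metric; it simply
# counts gold PII strings that survive redaction verbatim.
def leakage_rate(redacted: str, gold_pii_spans: list[str]) -> float:
    """Fraction of ground-truth PII strings still present in the output."""
    if not gold_pii_spans:
        return 0.0
    leaked = sum(1 for span in gold_pii_spans if span in redacted)
    return leaked / len(gold_pii_spans)

# Example: the email leaked but the name was redacted -> 0.5
print(leakage_rate(
    "Contact [NAME] at jane.doe@example.com.",
    ["Jane Doe", "jane.doe@example.com"],
))
```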

