Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing
Abstract
Recent advancements in diffusion models have made generative image editing more accessible, enabling creative edits but raising ethical concerns, particularly regarding malicious edits to human portraits that threaten privacy and identity security. Existing protection methods primarily rely on adversarial perturbations to nullify edits but often fail against diverse editing requests. We propose FaceLock, a novel approach to portrait protection that optimizes adversarial perturbations to destroy or significantly alter biometric information, rendering edited outputs biometrically unrecognizable. FaceLock integrates facial recognition and visual perception into perturbation optimization to provide robust protection against various editing attempts. We also highlight flaws in commonly used evaluation metrics and reveal how they can be manipulated, emphasizing the need for reliable assessments of protection. Experiments show FaceLock outperforms baselines in defending against malicious edits and is robust against purification techniques. Ablation studies confirm its stability and broad applicability across diffusion-based editing algorithms. Our work advances biometric defense and sets the foundation for privacy-preserving practices in image editing. The code is available at: https://github.com/taco-group/FaceLock.
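As a rough illustration of the mechanism described in the abstract, the sketch below shows a PGD-style loop that optimizes a bounded perturbation to push the face embedding of a (surrogate-)edited image away from the subject's original identity embedding. This is a minimal sketch under stated assumptions, not the authors' method: `face_embedder` (e.g., an ArcFace-style network) and `edit_surrogate` (a differentiable stand-in for the editing model) are hypothetical names; the actual implementation is in the linked repository.

```python
import torch

def protect(image, face_embedder, edit_surrogate,
            eps=8 / 255, alpha=2 / 255, steps=100):
    """Optimize an imperceptible perturbation so that edited outputs
    lose the subject's biometric signature (hypothetical sketch)."""
    with torch.no_grad():
        id_emb = face_embedder(image)            # original identity embedding
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        edited = edit_surrogate(image + delta)   # simulate a malicious edit
        emb = face_embedder(edited)
        # Descend on cosine similarity to the original identity, i.e.
        # maximize biometric distance. A visual-perception term (e.g.,
        # an LPIPS penalty between image and image + delta) could be
        # added here to keep the protected image visually faithful.
        loss = torch.nn.functional.cosine_similarity(emb, id_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend on similarity
            delta.clamp_(-eps, eps)              # L_inf budget keeps delta imperceptible
            delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```

The L_inf projection (`eps`) is what keeps the protection invisible to humans while still corrupting the biometric features that downstream editors preserve; the choice of surrogate editor determines how well the perturbation transfers to unseen editing pipelines.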
Community
Similar papers recommended by the Semantic Scholar API:
- DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing (2024)
- Privacy Protection in Personalized Diffusion Models via Targeted Cross-Attention Adversarial Attack (2024)
- UVCG: Leveraging Temporal Consistency for Universal Video Protection (2024)
- Attack as Defense: Run-time Backdoor Implantation for Image Content Protection (2024)
- Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum (2024)
- AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion models (2024)
- Geometry Cloak: Preventing TGS-based 3D Reconstruction from Copyrighted Images (2024)