arXiv:2505.04600

Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm

Published on May 7, 2025

AI-generated summary

An analysis of CivitAI reveals the rise of NSFW content and personalized models mimicking real individuals, highlighting the need for interventions to mitigate harm through better moderation, tool design, and platform policies.

Abstract

Open-source text-to-image (TTI) pipelines have become dominant in the landscape of AI-generated visual content, driven by technological advances that enable users to personalize models through adapters tailored to specific tasks. While personalization methods such as LoRA offer unprecedented creative opportunities, they also facilitate harmful practices, including the generation of non-consensual deepfakes and the amplification of misogynistic or hypersexualized content. This study presents an exploratory sociotechnical analysis of CivitAI, the most active platform for sharing and developing open-source TTI models. Drawing on a dataset of more than 40 million user-generated images and over 230,000 models, we find a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals. We also observe a strong influence of internet subcultures on the tools and practices shaping model personalization and the resulting visual media. In response to these findings, we contextualize the emergence of exploitative visual media through feminist and constructivist perspectives on technology, emphasizing how design choices and community dynamics shape platform outcomes. Building on this analysis, we propose interventions aimed at mitigating downstream harm, including improving content moderation, rethinking tool design, and establishing clearer platform policies that promote accountability and consent.
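
For readers unfamiliar with how low the technical barrier to this kind of personalization is, the sketch below shows the typical way a community-shared LoRA adapter is applied to an open-source TTI pipeline with Hugging Face's diffusers library. This is an illustrative sketch, not code from the study; the base-model ID, adapter path, and prompt are placeholders.

```python
# Minimal sketch: applying a community-shared LoRA adapter to an
# open-source text-to-image pipeline using Hugging Face `diffusers`.
# The base-model ID and adapter path are illustrative placeholders,
# not artifacts from the paper's dataset.
import torch
from diffusers import StableDiffusionPipeline

# Load a frozen, pretrained base TTI model (placeholder model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A LoRA adapter fine-tuned on a narrow concept (a style, an object,
# or, as the paper documents, a real person) attaches to the base
# weights in a single call.
pipe.load_lora_weights("path/to/lora_adapter")  # placeholder path

# Generation then proceeds exactly as with the unmodified model.
image = pipe("a portrait photo, studio lighting").images[0]
image.save("output.png")
```

Because the adapter itself is a small file that composes with any compatible base model at load time, distribution platforms such as CivitAI can host thousands of these personalizations independently of the models they modify, which is part of the moderation challenge the paper analyzes.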
