Can Community Notes Replace Professional Fact-Checkers?
Abstract
Two commonly employed strategies for combating the rise of misinformation on social media are (i) fact-checking by professional organisations and (ii) community moderation by platform users. Policy changes by Twitter/X and, more recently, Meta signal a shift away from partnerships with fact-checking organisations and towards an increased reliance on crowdsourced community notes. However, the extent and nature of the dependence of community notes on professional fact-checking remain unclear. To address this gap, we use language models to annotate a large corpus of Twitter/X community notes with attributes such as topic, cited sources, and whether they refute claims tied to broader misinformation narratives. Our analysis reveals that community notes cite fact-checking sources up to five times more often than previously reported. Fact-checking is especially crucial for notes on posts linked to broader narratives, which are twice as likely to reference fact-checking sources as other sources. In conclusion, our results show that successful community moderation relies heavily on professional fact-checking.
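The abstract describes the LLM-based annotation pipeline only at a high level. Below is a minimal sketch of what such annotation might look like, assuming an OpenAI-style chat API; the prompt wording, the label set (`health`, `politics`, ...), and the `gpt-4o-mini` model choice are illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch of LLM-based annotation of community notes:
# classify each note's topic and whether it cites a fact-checking source.
# Prompt, labels, and model are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You will be given a Twitter/X community note.
Answer with exactly two lines:
topic: one of [health, politics, science, scams, other]
cites_fact_checker: yes or no (does the note cite a professional
fact-checking organisation such as Snopes, PolitiFact, or AFP?)

Note: {note}"""

def annotate_note(note_text: str) -> dict:
    """Ask the model for a topic label and a fact-check-citation flag."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[{"role": "user", "content": PROMPT.format(note=note_text)}],
        temperature=0,  # keep annotation labels as deterministic as possible
    )
    text = response.choices[0].message.content
    # Parse "key: value" lines from the model's reply into a dict.
    labels = dict(
        line.split(":", 1) for line in text.strip().splitlines() if ":" in line
    )
    return {k.strip(): v.strip() for k, v in labels.items()}

if __name__ == "__main__":
    print(annotate_note(
        "This claim was debunked by Reuters Fact Check: "
        "https://www.reuters.com/fact-check/..."
    ))
```

Run over a corpus of notes, annotations like these would support the aggregate comparisons the abstract reports, e.g. the rate at which notes on narrative-linked posts cite fact-checking sources versus other sources.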
Community
Fact-checking agencies have come under intense scrutiny in recent months regarding their role in combating misinformation on social media. The current political climate has prompted Meta to shift away from partnerships with fact-checkers towards an increased reliance on crowdsourced community notes, the model used by Twitter/X. This work examines how this development might affect efforts to combat the spread of misinformation on social media by studying the role of fact-checking in Twitter/X's community notes.
We find that community notes and professional fact-checking are deeply interconnected. Fact-checkers conduct in-depth research beyond the reach of amateur users, while community notes publicise their work. The move by platforms to end their partnerships with, and funding for, fact-checking organisations will hinder those organisations' ability to fact-check and pursue investigative journalism, on which community note writers rely. This, in turn, will limit the efficacy of community notes, particularly for high-stakes claims related to health and broader conspiracy narratives.
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Beyond Translation: LLM-Based Data Generation for Multilingual Fact-Checking (2025)
- Community Notes Moderate Engagement With and Diffusion of False Information Online (2025)
- Towards Effective Extraction and Evaluation of Factual Claims (2025)
- Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies (2025)
- Task-Oriented Automatic Fact-Checking with Frame-Semantics (2025)
- Tracking the Takes and Trajectories of English-Language News Narratives across Trustworthy and Worrisome Websites (2025)
- GraphCheck: Breaking Long-Term Text Barriers with Extracted Knowledge Graph-Powered Fact-Checking (2025)