LazyReview: A Dataset for Uncovering Lazy Thinking in NLP Peer Reviews
Abstract
Peer review is a cornerstone of quality control in scientific publishing. With increasing workloads, the unintended use of 'quick' heuristics, referred to as lazy thinking, has emerged as a recurring issue that compromises review quality. Automated methods to detect such heuristics can help improve the peer-reviewing process. However, there is limited NLP research on this issue, and no real-world dataset exists to support the development of detection tools. This work introduces LazyReview, a dataset of peer-review sentences annotated with fine-grained lazy thinking categories. Our analysis reveals that Large Language Models (LLMs) struggle to detect these instances in a zero-shot setting. However, instruction-based fine-tuning on our dataset significantly boosts performance by 10-20 points, highlighting the importance of high-quality training data. Furthermore, a controlled experiment demonstrates that reviews revised with lazy thinking feedback are more comprehensive and actionable than those written without such feedback. We will release our dataset and the enhanced guidelines, which can be used to train junior reviewers in the community. (Code available here: https://github.com/UKPLab/arxiv2025-lazy-review)
Community
A dataset to uncover lazy thinking in NLP peer reviews
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- DeepReview: Improving LLM-based Paper Review with Human-like Deep Thinking Process (2025)
- Identifying Aspects in Peer Reviews (2025)
- Factual consistency evaluation of summarization in the Era of large language models (2024)
- Automatically Evaluating the Paper Reviewing Capability of Large Language Models (2025)
- ReviewEval: An Evaluation Framework for AI-Generated Reviews (2025)
- CLAIMCHECK: How Grounded are LLM Critiques of Scientific Papers? (2025)
- LongFaith: Enhancing Long-Context Reasoning in LLMs with Faithful Synthetic Data (2025)