Establishing Trustworthy LLM Evaluation via Shortcut Neuron Analysis
Abstract
Shortcut neuron patching identifies and suppresses shortcut neurons in language models, mitigating data contamination and enabling more trustworthy evaluation.
The development of large language models (LLMs) depends on trustworthy evaluation. However, most current evaluations rely on public benchmarks, which are prone to data contamination issues that significantly compromise fairness. Previous research has focused on constructing dynamic benchmarks to address contamination, but continuously building new benchmarks is costly and cyclical. In this work, we instead tackle contamination by analyzing the mechanisms of contaminated models themselves. Through our experiments, we find that the overestimation of contaminated models is likely due to parameters acquiring shortcut solutions during training. We further propose a novel method for identifying shortcut neurons through comparative and causal analysis. Building on this, we introduce an evaluation method called shortcut neuron patching to suppress shortcut neurons. Experiments validate the effectiveness of our approach in mitigating contamination. Additionally, our evaluation results exhibit a strong linear correlation with MixEval, a recently released trustworthy benchmark, achieving a Spearman coefficient (ρ) exceeding 0.95. This high correlation indicates that our method closely reveals the true capabilities of the models and is trustworthy. Further experiments demonstrate the generalizability of our method across various benchmarks and hyperparameter settings. Code: https://github.com/GaryStack/Trustworthy-Evaluation
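To make "suppressing shortcut neurons" concrete, here is a minimal sketch of inference-time neuron patching: zeroing the activations of a chosen set of MLP neurons with PyTorch forward hooks. This is not the paper's implementation; the model name (`gpt2`), the `shortcut_neurons` mapping, and the `make_patch_hook` helper are illustrative placeholders, and in the paper the neurons are located via comparative and causal analysis rather than hard-coded.

```python
# Sketch: suppress hypothetical "shortcut neurons" at inference time by zeroing
# their activations with forward hooks. Neuron indices below are placeholders,
# not the ones identified in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM; chosen only to keep the sketch runnable
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical shortcut neurons: {layer index: [indices in the MLP intermediate dim]}
shortcut_neurons = {3: [17, 204], 7: [58]}

def make_patch_hook(neuron_ids):
    def hook(module, inputs, output):
        # output: intermediate MLP activations, shape (batch, seq, intermediate_dim)
        output[..., neuron_ids] = 0.0
        return output
    return hook

handles = []
for layer_idx, neuron_ids in shortcut_neurons.items():
    # For GPT-2, the intermediate MLP projection is transformer.h[i].mlp.c_fc
    mlp_fc = model.transformer.h[layer_idx].mlp.c_fc
    handles.append(mlp_fc.register_forward_hook(make_patch_hook(neuron_ids)))

# Run evaluation with the patched model as usual
inputs = tokenizer("Q: What is 2 + 2? A:", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0]))

# Remove hooks to restore the original model
for h in handles:
    h.remove()
```

Using hooks leaves the model weights untouched, so a patch of this kind can be applied for an evaluation run and then cleanly removed.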
Community
Accepted to ACL 2025 Main Conference
Automated message from Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- Model Utility Law: Evaluating LLMs beyond Performance through Mechanism Interpretable Metric (2025)
- BiasFilter: An Inference-Time Debiasing Framework for Large Language Models (2025)
- FairSteer: Inference Time Debiasing for LLMs with Dynamic Activation Steering (2025)
- Teach2Eval: An Indirect Evaluation Method for LLM by Judging How It Teaches (2025)
- Truth Neurons (2025)
- J1: Exploring Simple Test-Time Scaling for LLM-as-a-Judge (2025)
- Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem (2025)