Chain-of-Verification Reduces Hallucination in Large Language Models • arXiv:2309.11495 • Published Sep 20, 2023
Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training • arXiv:2410.15460 • Published Oct 20, 2024
DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations • arXiv:2410.18860 • Published Oct 24, 2024
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models • arXiv:2411.14257 • Published Nov 21, 2024
Linear Correlation in LM's Compositional Generalization and Hallucination • arXiv:2502.04520 • Published Feb 6, 2025
The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering • arXiv:2502.03628 • Published Feb 5, 2025
When an LLM is apprehensive about its answers -- and when its uncertainty is justified • arXiv:2503.01688 • Published Mar 3, 2025
LettuceDetect: A Hallucination Detection Framework for RAG Applications • arXiv:2502.17125 • Published Feb 24, 2025