HALoGEN: Fantastic LLM Hallucinations and Where to Find Them • Paper • 2501.08292 • Published Jan 14, 2025
SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models • Paper • 2502.09604 • Published Feb 13, 2025
MetaFaith: Faithful Natural Language Uncertainty Expression in LLMs • Paper • 2505.24858 • Published May 2025