arXiv:2410.12222

On A Scale From 1 to 5: Quantifying Hallucination in Faithfulness Evaluation

Published on Oct 16, 2024
Abstract

Hallucination has been a popular topic in natural language generation (NLG). In real-world applications, unfaithful content can result in poor data quality or loss of trust from end users. Thus, it is crucial to fact-check before adopting NLG for production use, which can be expensive if done manually. In this paper, we investigate automated faithfulness evaluation in guided NLG. We developed a rubric template and used large language models (LLMs) to score generations on quantifiable scales. We compared popular LLMs as well as widely adopted natural language inference (NLI) models on scoring quality and sensitivity. In addition, we developed methods for generating synthetic unfaithful data, as well as heuristics to quantify the percentage of hallucinated content. Our results on four travel-domain industry datasets show that GPT-4 can provide accurate judgments and explanations of whether a source and a generation are factually consistent. Furthermore, we found that tuning NLI models on synthetic data can improve their performance. Lastly, we present insights on the latency and cost of deploying such a system.
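As a concrete illustration of the rubric-scoring setup the abstract describes, below is a minimal sketch of an LLM judge that grades a generation against its source on a 1-to-5 faithfulness scale. It assumes the OpenAI Python chat API; the prompt wording, model choice, and score parsing are illustrative stand-ins, not the paper's actual rubric template.

```python
# Minimal sketch of rubric-based faithfulness scoring with an LLM judge.
# Assumes the OpenAI chat API; the rubric text below is illustrative and
# is NOT the rubric template from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC_PROMPT = """You are a faithfulness judge. Given a SOURCE and a
GENERATION, rate how factually consistent the generation is with the
source on a scale from 1 to 5:
1 = entirely unfaithful (contradicts or invents facts)
3 = partially faithful (some claims are unsupported by the source)
5 = fully faithful (every claim is supported by the source)

SOURCE:
{source}

GENERATION:
{generation}

Answer with the score (1-5) on the first line, then a one-sentence
explanation on the second line."""


def score_faithfulness(source: str, generation: str,
                       model: str = "gpt-4") -> tuple[int, str]:
    """Ask the LLM judge for a 1-5 faithfulness score plus an explanation."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic judging
        messages=[{
            "role": "user",
            "content": RUBRIC_PROMPT.format(source=source,
                                            generation=generation),
        }],
    )
    text = response.choices[0].message.content.strip()
    first_line, _, explanation = text.partition("\n")
    return int(first_line.strip()[0]), explanation.strip()


if __name__ == "__main__":
    src = "The hotel offers free breakfast and is 2 km from the beach."
    gen = "Enjoy a complimentary breakfast at this beachfront hotel."
    score, why = score_faithfulness(src, gen)
    print(score, why)  # a faithful judge should flag "beachfront" as unsupported
```

For the NLI baseline the abstract mentions, an off-the-shelf entailment classifier (e.g., an MNLI-tuned model from the Hugging Face Hub) could score the same source/generation pairs by the probability that the source entails the generation.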
