Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations
Abstract
A method that integrates a large language model into the formal semantics of a paraconsistent logic, preserving logical soundness and completeness while leveraging the LLM's parametric knowledge.
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but they exhibit problems with logical consistency in the output they generate. How can we harness LLMs' broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics of a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the interpretation function on datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neuro-symbolic reasoning that leverages an LLM's knowledge while preserving the underlying logic's soundness and completeness properties.
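The paper's exact construction is not reproduced on this page, but the core idea can be sketched under some assumptions: take Belnap-Dunn four-valued logic (FDE) as a representative paraconsistent logic, and let the interpretation function ground each atomic proposition by querying an LLM independently about the statement and its negation. Contradictory answers map to the value BOTH and abstention on both queries maps to NEITHER, so the four-valued semantics absorbs the model's inconsistency instead of letting it trivialize inference. The names below (FDEValue, llm_grounded_interpretation, the toy knowledge lookup) are illustrative, not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class FDEValue:
    """A Belnap-Dunn truth value as an evidence pair (supports_true, supports_false):
    TRUE=(1,0), FALSE=(0,1), BOTH=(1,1), NEITHER=(0,0)."""
    supports_true: bool
    supports_false: bool

def neg(a: FDEValue) -> FDEValue:
    # Negation swaps the evidence for and against a statement.
    return FDEValue(a.supports_false, a.supports_true)

def conj(a: FDEValue, b: FDEValue) -> FDEValue:
    # Conjunction: supported iff both conjuncts are; refuted if either is.
    return FDEValue(a.supports_true and b.supports_true,
                    a.supports_false or b.supports_false)

def disj(a: FDEValue, b: FDEValue) -> FDEValue:
    # Disjunction: supported if either disjunct is; refuted iff both are.
    return FDEValue(a.supports_true or b.supports_true,
                    a.supports_false and b.supports_false)

def llm_grounded_interpretation(statement: str,
                                llm_asserts: Callable[[str], bool]) -> FDEValue:
    """Ground an atomic proposition in an LLM: query the model separately
    about the statement and its negation. Inconsistent answers yield BOTH;
    abstention on both yields NEITHER."""
    return FDEValue(llm_asserts(statement),
                    llm_asserts(f"It is not the case that {statement}"))

# Toy stand-in for an LLM call; a real system would prompt a model and
# parse a yes/no answer from its output.
knowledge = {"Paris is the capital of France": True,
             "It is not the case that Paris is the capital of France": False}
v = llm_grounded_interpretation("Paris is the capital of France",
                                lambda s: knowledge.get(s, False))
print(v)  # FDEValue(supports_true=True, supports_false=False), i.e. TRUE
```

The evidence-pair encoding makes the connectives one-line operations and makes the paraconsistent behavior visible: an atom assigned BOTH contributes contradictory evidence locally without entailing arbitrary conclusions, which is what allows soundness and completeness of the underlying logic to survive an unreliable oracle.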
Community
This paper takes the stance that, rather than getting LLMs to reason according to the precepts of classical logic, we can get paraconsistent logics to integrate LLMs in a manner that preserves the logics' soundness and completeness, despite LLMs' inherent inconsistency and incompleteness.