arXiv:2503.21544

SWI: Speaking with Intent in Large Language Models

Published on Mar 27 · Submitted by yuweiyin on Mar 31
Authors: Yuwei Yin et al.

Abstract

Intent, typically clearly formulated and planned, functions as a cognitive framework for reasoning and problem-solving. This paper introduces the concept of Speaking with Intent (SWI) in large language models (LLMs), where the explicitly generated intent encapsulates the model's underlying intention and provides high-level planning to guide subsequent analysis and communication. By emulating deliberate and purposeful thoughts in the human mind, SWI is hypothesized to enhance the reasoning capabilities and generation quality of LLMs. Extensive experiments on mathematical reasoning benchmarks consistently demonstrate the superiority of Speaking with Intent over Baseline (i.e., generation without explicit intent). Moreover, SWI outperforms answer-trigger prompting methods Chain-of-Thought and Plan-and-Solve and maintains competitive performance with the strong method ARR (Analyzing, Retrieving, and Reasoning). Additionally, the effectiveness and generalizability of SWI are solidified on reasoning-intensive question answering (QA) and text summarization benchmarks, where SWI brings consistent improvement to the Baseline generation. In text summarization, SWI-generated summaries exhibit greater accuracy, conciseness, and factual correctness, with fewer hallucinations. Furthermore, human evaluations verify the coherence, effectiveness, and interpretability of the intent produced by SWI. This proof-of-concept study creates a novel avenue for enhancing LLMs' reasoning abilities with cognitive notions.

Community

Paper author · Paper submitter

🔬 SWI: Speaking with Intent in Large Language Models
Paper: https://arxiv.org/abs/2503.21544
Code: https://github.com/YuweiYin/SWI

1️⃣ We introduce Speaking with Intent (SWI) in LLMs, requiring the models to articulate their own intent as a planning mechanism during generation, such that the generated intent effectively guides problem analysis and logical reasoning (a minimal prompting sketch follows below).
2️⃣ Extensive experiments across diverse reasoning and generation tasks, including mathematical reasoning, multiple-choice QA, and text summarization, demonstrate the effectiveness and generalizability of SWI.
3️⃣ SWI produces more accurate, concise, and factually reliable summaries with fewer hallucinations. Furthermore, human evaluations validate the coherence, effectiveness, and interpretability of the intent generated by SWI.

💡 Overall, this study creates a novel avenue for enhancing LLMs' reasoning abilities with cognitive notions. 🧠 Since intent is a fundamental aspect of natural language processing, eliciting and enhancing models' abilities to understand and generate intent could drive AI systems to the next level. 🚀
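In case a concrete example helps: below is a minimal sketch of SWI-style prompting, assuming an OpenAI-compatible chat API. The system prompt here is an illustrative paraphrase of the SWI idea, not the exact instruction used in the paper; the official prompts and evaluation code are in the GitHub repo linked above.

```python
# Minimal sketch of SWI-style prompting (illustrative, not the paper's
# exact prompt). Assumes the openai Python SDK (v1+) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical SWI instruction: ask the model to state its intent first,
# then reason and answer under that self-generated high-level plan.
SWI_SYSTEM_PROMPT = (
    "Before solving the problem, explicitly state your intent: what you "
    "aim to accomplish and the high-level plan you will follow. Then carry "
    "out the analysis step by step, guided by that intent, and give the "
    "final answer on the last line as 'Answer: ...'."
)

def answer_with_intent(question: str, model: str = "gpt-4o-mini") -> str:
    """Query the model with an SWI-style prompt and return its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SWI_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.0,  # deterministic decoding, typical for reasoning evals
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_with_intent(
        "A train travels 120 km in 1.5 hours. What is its average speed?"
    ))
```

Dropping the system prompt recovers the Baseline setting (generation without explicit intent), which is the comparison reported in the paper.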

[Figure: swi.png]
