SWI: Speaking with Intent in Large Language Models
Abstract
Intent, typically formulated clearly and planned deliberately, functions as a cognitive framework for reasoning and problem-solving. This paper introduces the concept of Speaking with Intent (SWI) in large language models (LLMs), where the explicitly generated intent encapsulates the model's underlying intention and provides high-level planning that guides subsequent analysis and communication. By emulating deliberate and purposeful thought in the human mind, SWI is hypothesized to enhance the reasoning capabilities and generation quality of LLMs. Extensive experiments on mathematical reasoning benchmarks consistently demonstrate the superiority of Speaking with Intent over the Baseline (i.e., generation without explicit intent). Moreover, SWI outperforms the answer-trigger prompting methods Chain-of-Thought and Plan-and-Solve, and remains competitive with the strong ARR method (Analyzing, Retrieving, and Reasoning). The effectiveness and generalizability of SWI are further confirmed on reasoning-intensive question answering (QA) and text summarization benchmarks, where SWI consistently improves over Baseline generation. In text summarization, SWI-generated summaries are more accurate, concise, and factually correct, with fewer hallucinations. Furthermore, human evaluations verify the coherence, effectiveness, and interpretability of the intent produced by SWI. This proof-of-concept study opens a novel avenue for enhancing LLMs' reasoning abilities with cognitive notions.
Community
💬 SWI: Speaking with Intent in Large Language Models
Paper: https://arxiv.org/abs/2503.21544
Code: https://github.com/YuweiYin/SWI
1️⃣ We introduce Speaking with Intent (SWI) in LLMs, requiring models to articulate their own intent as a planning mechanism during generation, such that the generated intent effectively guides problem analysis and logical reasoning (see the prompting sketch after this post).
2️⃣ Extensive experiments across diverse reasoning and generation tasks, including mathematical reasoning, multiple-choice QA, and text summarization, demonstrate the effectiveness and generalizability of SWI.
3️⃣ SWI produces more accurate, concise, and factually reliable summaries with fewer hallucinations. Furthermore, human evaluations validate the coherence, effectiveness, and interpretability of the intent generated by SWI.
💡 Overall, this study opens a novel avenue for enhancing LLMs' reasoning abilities with cognitive notions. 🧠 Since intent is a fundamental aspect of natural language processing, empowering, eliciting, and enhancing models' intent understanding and generation abilities could drive AI systems to the next level. 🚀
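To make the mechanism concrete, below is a minimal sketch of SWI-style prompting against an OpenAI-compatible chat API. The instruction wording, the `answer_with_intent` helper, and the model name are illustrative assumptions, not the verbatim prompt or code from the paper or the SWI repository.

```python
# Minimal sketch of Speaking-with-Intent (SWI) prompting, assuming an
# OpenAI-compatible chat API. The instruction text below is a hypothetical
# paraphrase of the SWI idea, not the paper's exact prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical SWI instruction: have the model first articulate its intent
# (a high-level plan), then carry out the analysis and produce the answer.
SWI_INSTRUCTION = (
    "Before answering, state your intent: briefly explain what you aim to do "
    "and outline a high-level plan. Then follow that plan to analyze the "
    "problem step by step and give the final answer."
)

def answer_with_intent(question: str, model: str = "gpt-4o-mini") -> str:
    """Query a chat model with an SWI-style prompt and return its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SWI_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_with_intent(
        "A train travels 120 km in 1.5 hours. What is its average speed?"
    ))
```

The key design choice is that the intent statement precedes the solution within a single generation pass, so the model's own stated plan conditions its subsequent reasoning and final answer.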
This is an automated message from the Librarian Bot. I found the following papers similar to this paper, recommended by the Semantic Scholar API:
- ARR: Question Answering with Large Language Models via Analyzing, Retrieving, and Reasoning (2025)
- SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs (2025)
- IAO Prompting: Making Knowledge Flow Explicit in LLMs through Structured Reasoning Templates (2025)
- SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models (2025)
- CER: Confidence Enhanced Reasoning in LLMs (2025)
- Chain of Draft: Thinking Faster by Writing Less (2025)
- MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency (2025)