arxiv:2505.14684

Mind the Gap: Bridging Thought Leap for Improved Chain-of-Thought Tuning

Published on May 20 · Submitted by yanyc on May 23
Abstract

AI-generated summary: A model for detecting and generating missing intermediate steps in mathematical Chain-of-Thought reasoning improves performance and generalization on mathematical and logical reasoning tasks.
Large language models (LLMs) have achieved remarkable progress on mathematical tasks through Chain-of-Thought (CoT) reasoning. However, existing mathematical CoT datasets often suffer from Thought Leaps due to experts omitting intermediate steps, which negatively impacts model learning and generalization. We propose the CoT Thought Leap Bridge Task, which aims to automatically detect leaps and generate the missing intermediate reasoning steps, restoring the completeness and coherence of CoT. To facilitate this, we constructed a specialized training dataset called ScaleQM+, based on the structured ScaleQuestMath dataset, and trained CoT-Bridge to bridge thought leaps. Through comprehensive experiments on mathematical reasoning benchmarks, we demonstrate that models fine-tuned on bridged datasets consistently outperform those trained on original datasets, with improvements of up to +5.87% on NuminaMath. Our approach effectively enhances distilled data (+3.02%) and provides better starting points for reinforcement learning (+3.1%), functioning as a plug-and-play module compatible with existing optimization techniques. Furthermore, CoT-Bridge demonstrates improved generalization to out-of-domain logical reasoning tasks, confirming that enhancing reasoning completeness yields broadly applicable benefits.
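
To make the training setup concrete, here is a minimal sketch of how leap/bridge pairs could be derived from a complete CoT by deleting intermediate steps. The deletion heuristic and function names below are our illustrative assumptions, not the paper's exact ScaleQM+ construction recipe.

```python
# Sketch: build leap/bridge training pairs from a complete CoT by
# deleting intermediate steps, so a model can learn to restore them.
# The deletion heuristic here is illustrative, not the paper's recipe.
import random

def make_leap_example(steps: list[str], n_drop: int = 1) -> dict:
    """Drop n_drop intermediate steps from a complete CoT.

    Returns the gapped chain (model input) and the removed steps
    (target), mimicking a 'thought leap' and its bridge.
    """
    assert len(steps) > n_drop + 2, "need enough intermediate steps"
    # Never drop the first or last step: the leap is strictly internal.
    candidates = list(range(1, len(steps) - 1))
    dropped = sorted(random.sample(candidates, n_drop))
    gapped = [s for i, s in enumerate(steps) if i not in dropped]
    bridge = [steps[i] for i in dropped]
    return {"input": gapped, "target": bridge}

cot = [
    "Let x be the number of apples.",
    "Then 2x + 3 = 11.",
    "So 2x = 8.",
    "Therefore x = 4.",
]
print(make_leap_example(cot))
```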

Community

Paper submitter

We are happy to introduce CoT-Bridge, a novel framework that addresses Thought Leaps in CoT reasoning by automatically detecting gaps and generating the missing intermediate steps. Bridging these gaps improves model learning, generalization, and performance across mathematical and logical reasoning tasks.
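
For readers who want to experiment with a bridging model, here is a hedged sketch of what inference might look like via the Hugging Face transformers API. The model ID and prompt template are placeholders, not the paper's documented interface.

```python
# Sketch: ask a bridging model to fill the gap between two adjacent
# CoT steps. Model ID and prompt format are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/CoT-Bridge"  # placeholder, not a confirmed repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Fill in the missing reasoning between these steps:\n"
    "Step A: Then 2x + 3 = 11.\n"
    "Step B: Therefore x = 4.\n"
    "Missing step:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice, the generated step would be spliced back into the chain so the bridged dataset can be used for fine-tuning, as the paper's experiments describe.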


Models citing this paper 7


Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 1