Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language Models
Abstract
Two methods, Temporal Self-Consistency Voting and Temporal Consistency Reinforcement, improve diffusion large language models by leveraging temporal consistency in intermediate predictions.
Diffusion large language models (dLLMs) generate text through iterative denoising, yet current decoding strategies discard rich intermediate predictions in favor of the final output. This work reveals a critical phenomenon, temporal oscillation, in which correct answers often emerge during intermediate denoising steps but are overwritten later. To address this issue, we introduce two complementary methods that exploit temporal consistency: 1) Temporal Self-Consistency Voting, a training-free, test-time decoding strategy that aggregates predictions across denoising steps and selects the most consistent output; and 2) Temporal Consistency Reinforcement, a post-training method that uses Temporal Semantic Entropy (TSE), a measure of semantic stability across intermediate predictions, as a reward signal to encourage stable generations. Empirical results across multiple benchmarks demonstrate the effectiveness of both approaches. Using the negative TSE reward alone, we observe an average improvement of 24.7% on the Countdown dataset over an existing dLLM. Combined with the accuracy reward, we achieve absolute gains of 2.0% on GSM8K, 4.3% on MATH500, 6.6% on SVAMP, and 25.3% on Countdown. Our findings underscore the untapped potential of temporal dynamics in dLLMs and offer two simple yet effective tools to harness them.
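To make the voting idea concrete, the sketch below shows one way a majority vote over intermediate denoising-step answers could work. The function name, the tie-breaking rule, and the use of a plain (unweighted) vote are assumptions for illustration; the paper's actual aggregation may weight steps or break ties differently.

```python
from collections import Counter
from typing import Hashable, List

def temporal_self_consistency_vote(step_answers: List[Hashable]) -> Hashable:
    """Majority vote over the answers extracted at each intermediate denoising step."""
    if not step_answers:
        raise ValueError("need at least one intermediate prediction")
    counts = Counter(step_answers)
    max_count = max(counts.values())
    most_frequent = {ans for ans, c in counts.items() if c == max_count}
    # Tie-break toward the answer produced latest in the denoising trajectory
    # (an assumption; the paper may weight or break ties differently).
    for ans in reversed(step_answers):
        if ans in most_frequent:
            return ans

# Example: "42" dominates the trajectory, so it wins even though the
# final step oscillated back to "17".
print(temporal_self_consistency_vote(["17", "42", "42", "42", "17"]))  # -> 42
```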
Community
This work uncovers temporal oscillation in diffusion large language models (dLLMs), where correct answers appear mid-process but are later lost. To harness this, the authors propose Temporal Self-Consistency Voting and Temporal Consistency Reinforcement, boosting performance across multiple benchmarks.
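As a companion to the voting sketch above, here is a minimal illustration of how a Temporal Semantic Entropy style reward could be computed. The function names and the string-normalization stand-in for semantic clustering are assumptions, not the paper's implementation, which may use a stronger semantic-equivalence check.

```python
import math
from collections import Counter
from typing import Callable, List

def temporal_semantic_entropy(
    step_answers: List[str],
    canonicalize: Callable[[str], str] = lambda s: s.strip().lower(),
) -> float:
    """Shannon entropy of the clusters formed by intermediate answers.

    Semantic clustering is approximated here by simple string normalization.
    """
    clusters = Counter(canonicalize(a) for a in step_answers)
    total = sum(clusters.values())
    return -sum((c / total) * math.log(c / total) for c in clusters.values())

def temporal_consistency_reward(step_answers: List[str]) -> float:
    """Negative TSE: trajectories whose intermediate answers stay stable score higher."""
    return -temporal_semantic_entropy(step_answers)

# A stable trajectory gets reward 0; an oscillating one is penalized.
print(temporal_consistency_reward(["42", "42", "42"]))        # -> 0.0
print(temporal_consistency_reward(["42", "17", "42", "17"]))  # -> about -0.69
```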
The following papers were recommended by the Semantic Scholar API
- DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation (2025)
- Beyond Fixed: Variable-Length Denoising for Diffusion Large Language Models (2025)
- wd1: Weighted Policy Optimization for Reasoning in Diffusion Language Models (2025)
- DIFFA: Large Language Diffusion Models Can Listen and Understand (2025)
- EDGE-GRPO: Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity (2025)
- SRFT: A Single-Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning (2025)
- UloRL: An Ultra-Long Output Reinforcement Learning Approach for Advancing Large Language Models' Reasoning Abilities (2025)