arxiv:2506.00795

Bridging Supervised and Temporal Difference Learning with Q-Conditioned Maximization

Published on Jun 1, 2025

Abstract

AI-generated summary: GCReinSL enhances supervised learning for offline goal-conditioned RL by integrating Q-function estimation and maximization, improving trajectory stitching and performance.

Recently, supervised learning (SL) methods have emerged as an effective approach to offline reinforcement learning (RL) due to their simplicity, stability, and efficiency. However, recent studies show that SL methods lack the trajectory-stitching capability typically associated with temporal difference (TD)-based approaches. A question naturally surfaces: how can we endow SL methods with stitching capability and bridge their performance gap with TD learning? To answer this question, we introduce Q-conditioned maximization supervised learning for offline goal-conditioned RL, which enhances SL with stitching capability through a Q-conditioned policy and Q-conditioned maximization. Concretely, we propose Goal-Conditioned Reinforced Supervised Learning (GCReinSL), which consists of (1) estimating the Q-function with a conditional variational autoencoder (CVAE) from the offline dataset and (2) finding the maximum Q-value within the data support by combining Q-function maximization with expectile regression. At inference time, the policy selects actions conditioned on this maximum Q-value. Experimental results from stitching evaluations on offline RL datasets demonstrate that our method outperforms prior SL approaches with stitching capabilities as well as goal data augmentation techniques.
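To make step (2) and the inference procedure concrete, the sketch below illustrates how expectile regression can approximate the maximum in-support Q-value and how a Q-conditioned policy might consume it. This is a minimal PyTorch sketch under assumptions: the names `expectile_loss`, `max_q_net`, `max_q_update`, and `act`, and the interfaces of the networks, are hypothetical placeholders, not the authors' released implementation.

```python
import torch

def expectile_loss(pred, target, tau=0.9):
    # Asymmetric squared error: errors where target > pred are weighted by tau,
    # the rest by (1 - tau). As tau -> 1, the minimizer approaches the largest
    # target value supported by the data, without querying out-of-support actions.
    diff = target - pred
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

def max_q_update(max_q_net, states, goals, q_values, optimizer, tau=0.9):
    # One gradient step pushing max_q_net(s, g) toward the tau-expectile of the
    # dataset Q-values, i.e. an in-support surrogate for max_a Q(s, a, g).
    pred = max_q_net(states, goals)
    loss = expectile_loss(pred, q_values, tau)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def act(policy, max_q_net, state, goal):
    # At inference time, condition the policy on the estimated maximum Q-value
    # so it imitates the best in-support behavior rather than the average one.
    q_max = max_q_net(state, goal)
    return policy(state, goal, q_max)
```

Driving tau toward 1 gives a tighter but noisier approximation of the maximum; intermediate values (roughly 0.7 to 0.9) are common in expectile-based offline RL methods, though the paper's exact setting should be taken from the original text.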
