arXiv:2506.18903

VMem: Consistent Interactive Video Scene Generation with Surfel-Indexed View Memory

Published on Jun 23 · Submitted by liguang0115 on Jun 24

Abstract

AI-generated summary: A novel memory mechanism called Surfel-Indexed View Memory (VMem) enhances video generation by efficiently remembering and retrieving relevant past views, improving long-term scene coherence and reducing computational cost.

We propose a novel memory mechanism to build video generators that can explore environments interactively. Similar results have previously been achieved by out-painting 2D views of the scene while incrementally reconstructing its 3D geometry, which quickly accumulates errors, or by video generators with a short context window, which struggle to maintain scene coherence over the long term. To address these limitations, we introduce Surfel-Indexed View Memory (VMem), a mechanism that remembers past views by indexing them geometrically based on the 3D surface elements (surfels) they have observed. VMem enables the efficient retrieval of the most relevant past views when generating new ones. By focusing only on these relevant views, our method produces consistent explorations of imagined environments at a fraction of the computational cost of using all past views as context. We evaluate our approach on challenging long-term scene synthesis benchmarks and demonstrate superior performance compared to existing methods in maintaining scene coherence and camera control.
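The retrieval idea in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical Python illustration, not the authors' implementation: surfels record which past views observed them, and when a new view is to be generated, the surfels visible from the target camera vote for the past views to retrieve as context. All names (`Surfel`, `SurfelViewMemory`, `retrieve`, `k`) are assumptions for illustration; visibility testing and the video generator itself are omitted.

```python
# Minimal sketch of a surfel-indexed view memory (illustrative, not the
# paper's code): surfels record which past views observed them, and the
# surfels visible from a target camera vote for the views to retrieve.
from collections import Counter
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Surfel:
    """A 3D surface element and the set of views that observed it."""
    position: np.ndarray  # 3D center of the element
    normal: np.ndarray    # surface orientation
    radius: float         # spatial extent of the element
    observed_by: set = field(default_factory=set)  # ids of observing views


class SurfelViewMemory:
    def __init__(self):
        self.views = []    # each entry: {"image": ..., "pose": ...}
        self.surfels = []  # all surfels indexed so far

    def add_view(self, image, pose, new_surfels):
        """Store a generated view and index it by the surfels it observed."""
        view_id = len(self.views)
        self.views.append({"image": image, "pose": pose})
        for s in new_surfels:
            s.observed_by.add(view_id)
            self.surfels.append(s)
        return view_id

    def retrieve(self, visible_surfels, k=4):
        """Return the k past views most relevant to a target camera.

        `visible_surfels` are the memory surfels that project into the
        target viewpoint (visibility testing is omitted here); each one
        votes for every view that originally observed it.
        """
        votes = Counter()
        for s in visible_surfels:
            votes.update(s.observed_by)
        # Only these top-k views are handed to the generator as context,
        # rather than the entire view history.
        return [self.views[v] for v, _ in votes.most_common(k)]
```

Because retrieval touches only the surfels visible from the target camera, the context passed to the generator stays small even as the explored scene grows, which is consistent with the abstract's claim of operating at a fraction of the cost of using all past views as context.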

Community

liguang0115 (paper author and submitter):

project page: https://v-mem.github.io/


Models citing this paper 1

Datasets citing this paper 0


Spaces citing this paper 1

Collections including this paper 3