arXiv:2506.18898

Vision as a Dialect: Unifying Visual Understanding and Generation via Text-Aligned Representations

Published on Jun 23 · Submitted by csuhan on Jun 24
Abstract

A multimodal framework uses a Text-Aligned Tokenizer (TA-Tok) to integrate vision and text into a unified space, employing a generative de-tokenizer with autoregressive and diffusion-based models for efficient and high-fidelity visual outputs.

AI-generated summary

This paper presents a multimodal framework that unifies visual understanding and generation within a shared discrete semantic representation. At its core is the Text-Aligned Tokenizer (TA-Tok), which converts images into discrete tokens using a text-aligned codebook projected from a large language model's (LLM) vocabulary. By integrating vision and text into a unified space with an expanded vocabulary, the multimodal LLM, Tar, enables cross-modal input and output through a shared interface, without modality-specific designs. The authors also propose scale-adaptive encoding and decoding to balance efficiency against visual detail, along with a generative de-tokenizer that produces high-fidelity visual outputs. To address diverse decoding needs, two complementary de-tokenizers are used: a fast autoregressive model and a diffusion-based model. To enhance modality fusion, the authors investigate advanced pre-training tasks and demonstrate improvements in both visual understanding and generation. Experiments across benchmarks show that Tar matches or surpasses existing multimodal LLM methods while achieving faster convergence and greater training efficiency. Code, models, and data are available at https://tar.csuhan.com.
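
The core TA-Tok idea described above, quantizing image features against a codebook projected out of the LLM's own token-embedding table so that visual tokens live in the LLM vocabulary, can be illustrated with a short sketch. The sketch below is not the authors' implementation: the `TextAlignedTokenizer` class, the linear projections, the nearest-neighbour quantizer, and all tensor sizes are assumptions chosen only to make the text-aligned quantization step concrete.

```python
# Minimal sketch (not the paper's code) of a "text-aligned" image tokenizer:
# image patch features are quantized against a codebook obtained by projecting
# an LLM's token-embedding table, so every visual token is an id in (a projected
# copy of) the LLM vocabulary. All names and sizes below are hypothetical.

import torch
import torch.nn as nn


class TextAlignedTokenizer(nn.Module):
    def __init__(self, llm_embeddings: torch.Tensor, vision_dim: int, code_dim: int = 64):
        super().__init__()
        # Frozen LLM token embeddings, shape (vocab_size, llm_dim).
        self.register_buffer("llm_embeddings", llm_embeddings)
        # Project text embeddings and vision features into a shared code space.
        self.text_proj = nn.Linear(llm_embeddings.shape[1], code_dim)
        self.vision_proj = nn.Linear(vision_dim, code_dim)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        """patch_features: (batch, num_patches, vision_dim) from any vision encoder."""
        codebook = self.text_proj(self.llm_embeddings)           # (vocab, code_dim)
        z = self.vision_proj(patch_features)                     # (B, N, code_dim)
        # Nearest-neighbour quantization: squared distance of every patch code
        # to every codebook entry, then pick the closest vocabulary id.
        dists = ((z.unsqueeze(2) - codebook) ** 2).sum(dim=-1)   # (B, N, vocab)
        return dists.argmin(dim=-1)                              # (B, N) discrete token ids


if __name__ == "__main__":
    vocab_size, llm_dim, vision_dim = 1000, 32, 48               # toy sizes
    tokenizer = TextAlignedTokenizer(torch.randn(vocab_size, llm_dim), vision_dim)
    token_ids = tokenizer(torch.randn(2, 16, vision_dim))        # 2 images, 16 patches each
    print(token_ids.shape)                                       # torch.Size([2, 16])
```

Because the resulting ids index a projection of the LLM vocabulary, visual and textual tokens can in principle flow through the same interface, which is the "shared discrete semantic representation" the abstract refers to; the paper's actual tokenizer, scale-adaptive encoding, and generative de-tokenizers are described in the full text.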

Community

Paper author and submitter:

Project Page: https://tar.csuhan.com


Models citing this paper: 0

Datasets citing this paper: 0

Spaces citing this paper: 1

Collections including this paper: 4