arXiv:2406.18587

Nomic Embed Vision: Expanding the Latent Space

Published on Jun 6, 2024
Authors:

Abstract

This technical report describes the training of nomic-embed-vision, a highly performant, open-code, open-weights image embedding model that shares the same latent space as nomic-embed-text. Together, nomic-embed-vision and nomic-embed-text form the first unified latent space to achieve high performance across vision, language, and multimodal tasks.
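
Because both encoders map into one latent space, a text query can score images directly with cosine similarity, with no adapter or re-projection step. Below is a minimal retrieval sketch, assuming the released checkpoints are published on the Hugging Face Hub as nomic-ai/nomic-embed-text-v1.5 and nomic-ai/nomic-embed-vision-v1.5 and that the text model uses its documented "search_query: " task prefix; none of these identifiers appear in the abstract itself.

```python
# Cross-modal retrieval sketch: embed a text query and candidate images
# into the shared Nomic latent space, then rank images by cosine similarity.
# Model IDs, the "search_query: " prefix, and the pooling choices below are
# assumptions based on the released checkpoints, not details from the abstract.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import AutoImageProcessor, AutoModel, AutoTokenizer

text_id = "nomic-ai/nomic-embed-text-v1.5"      # assumed checkpoint name
vision_id = "nomic-ai/nomic-embed-vision-v1.5"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(text_id)
text_model = AutoModel.from_pretrained(text_id, trust_remote_code=True).eval()
processor = AutoImageProcessor.from_pretrained(vision_id)
vision_model = AutoModel.from_pretrained(vision_id, trust_remote_code=True).eval()

def embed_text(queries: list[str]) -> torch.Tensor:
    # nomic-embed-text expects a task prefix; search queries use "search_query: ".
    batch = tokenizer(
        [f"search_query: {q}" for q in queries],
        padding=True, truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        hidden = text_model(**batch).last_hidden_state
    # Mean-pool over non-padding tokens, then L2-normalize.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (hidden * mask).sum(1) / mask.sum(1)
    return F.normalize(pooled, p=2, dim=1)

def embed_images(images: list[Image.Image]) -> torch.Tensor:
    batch = processor(images, return_tensors="pt")
    with torch.no_grad():
        hidden = vision_model(**batch).last_hidden_state
    # Take the [CLS] token as the image embedding, then L2-normalize.
    return F.normalize(hidden[:, 0], p=2, dim=1)

# Rank candidate images for a query. Cosine similarity works directly
# because both encoders embed into the same space.
query_emb = embed_text(["a photo of a golden retriever"])
image_emb = embed_images([Image.open(p) for p in ["dog.jpg", "cat.jpg"]])
scores = query_emb @ image_emb.T
print(scores)
```

The final matrix product is the point of the shared space: query and image vectors are comparable as-is, whereas a CLIP image encoder paired with a separate text embedder would produce two incompatible spaces requiring a learned projection between them.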
