arxiv:2404.12096

LongEmbed: Extending Embedding Models for Long Context Retrieval

Published on Apr 18

Abstract

Embedding models play a pivotal role in modern NLP applications such as information retrieval (IR) and retrieval-augmented generation (RAG). While the context limit of LLMs has been pushed beyond 1 million tokens, embedding models remain confined to a narrow context window of no more than 8k tokens, which keeps them out of application scenarios requiring long inputs such as legal contracts. This paper explores context window extension of existing embedding models, pushing the limit to 32k without requiring additional training. First, we examine the performance of current embedding models for long-context retrieval on our newly constructed LongEmbed benchmark. LongEmbed comprises two synthetic tasks and four carefully chosen real-world tasks, featuring documents of varying length and dispersed target information. Benchmarking results underscore huge room for improvement in these models. Building on this, comprehensive experiments show that training-free context window extension strategies such as position interpolation can effectively extend the context window of existing embedding models severalfold, regardless of whether their original context is 512 or beyond 4k tokens. Furthermore, for models employing absolute position encoding (APE), we show the possibility of further fine-tuning to harvest notable performance gains while strictly preserving original behavior for short inputs. For models using rotary position embedding (RoPE), significant enhancements are observed when employing RoPE-specific methods such as NTK and SelfExtend, indicating RoPE's superiority over APE for context window extension. To facilitate future research, we release E5-Base-4k and E5-RoPE-Base, along with the LongEmbed benchmark.
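
To make the idea of training-free extension concrete, below is a minimal sketch of linear position interpolation applied to a standard RoPE formulation: position indices are divided by a scaling factor so that a longer input is squeezed into the position range the model saw during training. The function name and parameters are illustrative, not the authors' released code.

    import torch

    def rope_cos_sin(seq_len, dim, base=10000.0, scale=1.0):
        # Inverse frequencies per dimension pair, as in vanilla RoPE.
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
        # Linear position interpolation: dividing positions by `scale` squeezes
        # seq_len positions into the range seen during training (scale=1 is plain RoPE).
        positions = torch.arange(seq_len).float() / scale
        angles = torch.outer(positions, inv_freq)  # shape: (seq_len, dim // 2)
        return angles.cos(), angles.sin()

    # Example: a model trained with a 4k window, applied to 32k inputs (scale = 8).
    cos, sin = rope_cos_sin(seq_len=32768, dim=64, scale=32768 / 4096)

NTK-aware scaling and SelfExtend modify the same computation in different ways, rescaling the frequency base or reusing grouped position indices for distant tokens, rather than uniformly shrinking positions.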

Community

This looks quite promising. I'm looking forward to running the LongEmbed benchmark via MTEB myself.

Paper author

Thanks! We have incorporated LongEmbed into MTEB since version 1.6.22, so I believe the evaluation process will be very convenient. See the PR here for reference: https://github.com/embeddings-benchmark/mteb/pull/393#issuecomment-2071944535
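
For reference, here is a rough sketch of what such a run could look like with the mteb package and a Sentence Transformers model; the model checkpoint and LongEmbed task identifiers below are illustrative guesses and should be checked against the PR linked above.

    from mteb import MTEB
    from sentence_transformers import SentenceTransformer

    # Any Sentence Transformers-compatible embedding model can be plugged in here;
    # this checkpoint is just an example.
    model = SentenceTransformer("intfloat/e5-base-v2")

    # Task names are assumptions; see the linked PR for the exact identifiers
    # registered in MTEB for the LongEmbed tasks.
    evaluation = MTEB(tasks=["LEMBNarrativeQARetrieval", "LEMBNeedleRetrieval"])
    evaluation.run(model, output_folder="results/longembed")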


Models citing this paper 2

Datasets citing this paper 1

Spaces citing this paper 8

Collections including this paper 0
