Papers
arxiv:2504.10326

AlayaDB: The Data Foundation for Efficient and Effective Long-context LLM Inference

Published on Apr 14
· Submitted by YangshenDeng on Apr 17
Abstract

AlayaDB is a cutting-edge vector database system natively architected at AlayaDB AI for efficient and effective long-context inference for Large Language Models (LLMs). Specifically, it decouples the KV cache and attention computation from the LLM inference system and encapsulates them into a novel vector database system. For Model-as-a-Service (MaaS) providers, AlayaDB consumes fewer hardware resources and offers higher generation quality across workloads with different Service Level Objectives (SLOs), compared with existing alternative solutions (e.g., KV cache disaggregation, retrieval-based sparse attention). The crux of AlayaDB is that it abstracts attention computation and cache management for LLM inference into a query processing procedure and optimizes performance via a native query optimizer. In this work, we demonstrate the effectiveness of AlayaDB via (i) three use cases from our industry partners, and (ii) extensive experimental results on LLM inference benchmarks.
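The abstract mentions retrieval-based sparse attention as one of the approaches AlayaDB is compared with. As a rough illustration of that general idea (this is a minimal NumPy sketch of top-k sparse attention, not AlayaDB's actual interface or algorithm; the function name and parameters are assumptions for illustration):

```python
import numpy as np

def sparse_attention_topk(q, K, V, k=4):
    """Illustrative sketch: treat attention as a top-k vector query.

    Instead of attending over every cached key, retrieve only the k keys
    most similar to the query vector, then compute softmax attention over
    that small subset of the KV cache.
    """
    scores = K @ q                          # similarity of query to every cached key
    topk = np.argsort(scores)[-k:]          # indices of the k highest-scoring keys
    s = scores[topk] / np.sqrt(q.shape[0])  # scaled scores for the selected keys
    w = np.exp(s - s.max())                 # numerically stable softmax
    w /= w.sum()
    return w @ V[topk]                      # weighted combination of selected values
```

With `k` equal to the full cache length this reduces to ordinary dense attention; shrinking `k` trades a little quality for touching far fewer cached entries, which is the trade-off such systems tune.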

Community

Paper author Paper submitter

🔥 We introduce AlayaDB, the first database system for KV cache and attention. The paper has been accepted to the SIGMOD 2025 industry track.

🔥 We are AlayaDB.AI, a startup focused on data infrastructure for the LLM era, including vector databases and LLM inference systems. Our page: http://www.alayadb.tech/

🔥 AlayaDB is built on top of AlayaLite, our open-sourced, high-performance, coroutine-based vector engine. https://github.com/AlayaDB-AI/AlayaLite

