vLLM
vLLM is a high-performance, memory-efficient inference engine for open-source LLMs.
It delivers efficient scheduling, KV-cache handling, batching, and decoding—all wrapped in a production-ready server.
Core features:
- PagedAttention for memory efficiency
- Continuous batching
- Optimized CUDA/HIP execution
- Speculative decoding & chunked prefill
- Multi-backend and hardware support: runs on NVIDIA GPUs, AMD GPUs, and AWS Neuron, among other platforms
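As a quick illustration of the Python API, the sketch below loads a small model and submits a few prompts; the engine batches them and manages the KV cache internally. The model name is only an example, and defaults are used for every setting not shown.

```python
# Minimal offline-inference sketch with vLLM's Python API.
from vllm import LLM, SamplingParams

# Load an example model; PagedAttention and continuous batching are handled by the engine.
llm = LLM(model="facebook/opt-125m")

sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Prompts submitted together are scheduled and batched automatically.
prompts = [
    "The capital of France is",
    "Explain KV caching in one sentence:",
]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```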
Configuration

- Max Number of Sequences: The maximum number of sequences (requests) that can be processed together in a single batch. Controls
the batch size by sequence count, affecting throughput and memory usage. For example, if max_num_seqs=8, up to 8 different prompts can
be handled at once, regardless of their individual lengths, as long as the total token count also fits within the Max Number of Batched Tokens.
- Max Number of Batched Tokens: The maximum total number of tokens (summed across all sequences) that can be processed in a single
batch. Limits batch size by token count, balancing throughput and GPU memory allocation.
- Tensor Parallel Size: The number of GPUs across which model weights are split within each layer. Increasing this allows larger
models to run and frees up GPU memory for KV cache, but may introduce synchronization overhead.
- KV Cache DType: The data type used for storing the key-value cache during generation. Options include "auto", "fp8", "fp8_e5m2",
and "fp8_e4m3". Using lower-precision types can reduce memory usage but may slightly impact generation quality.
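The sketch below shows how these options might be passed when constructing an engine through the Python API. The model name and the specific values are illustrative assumptions, not recommendations; they should be tuned to the model and available GPU memory.

```python
from vllm import LLM

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model, assumption
    max_num_seqs=8,               # at most 8 sequences processed together in a batch
    max_num_batched_tokens=4096,  # cap on total tokens across all sequences per step
    tensor_parallel_size=2,       # split each layer's weights across 2 GPUs
    kv_cache_dtype="fp8",         # store the KV cache in lower precision
)
```

When launching a server instead of running offline, the same settings are typically exposed as CLI flags such as --max-num-seqs, --max-num-batched-tokens, --tensor-parallel-size, and --kv-cache-dtype.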