vLLM

vLLM is a high-performance, memory-efficient inference engine for open-source LLMs. It delivers efficient scheduling, KV-cache handling, batching, and decoding—all wrapped in a production-ready server. For most use cases, TGI, vLLM, and SGLang will be equivalently good options.

Core features:

- PagedAttention for memory-efficient KV-cache management
- Continuous batching of incoming requests
- Optimized CUDA kernels and quantization support (GPTQ, AWQ, INT8, FP8)
- Tensor and pipeline parallelism for multi-GPU serving
- An OpenAI-compatible API server

Configuration

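As a minimal sketch, assuming you use vLLM's Python API directly (the model name below is illustrative), engine arguments such as `dtype` and `max_model_len` are passed when constructing the engine; in a container deployment the same options are supplied as command-line flags.

```python
from vllm import LLM, SamplingParams

# Minimal sketch: the model name is illustrative. Engine arguments such as
# dtype and max_model_len are passed directly to the LLM constructor; a
# container deployment takes the same options as --dtype / --max-model-len.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    dtype="auto",
    max_model_len=8192,
)

outputs = llm.generate(
    ["What is vLLM?"],
    SamplingParams(temperature=0.7, max_tokens=128),
)
print(outputs[0].outputs[0].text)
```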

For more advanced configuration, you can pass any of the Engine Arguments that vLLM supports as container arguments. For example, setting `enable_lora` to `true` would look like this:

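Below is a hedged sketch of the equivalent setup through vLLM's Python API, where `enable_lora=True` mirrors the `--enable-lora` container argument; the model name and adapter path are illustrative placeholders.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# enable_lora=True mirrors the --enable-lora container argument; the model
# name and the adapter path below are illustrative placeholders.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    enable_lora=True,
)

outputs = llm.generate(
    ["Summarize what LoRA adapters do."],
    SamplingParams(max_tokens=64),
    lora_request=LoRARequest("my-adapter", 1, "/path/to/lora-adapter"),
)
print(outputs[0].outputs[0].text)
```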

Supported models

vLLM has wide support for large language models and embedding models. We recommend reading the supported models section in the vLLM documentation for a full list.

vLLM also supports model implementations that are available in Transformers. Not all models work this way yet: most decoder-only language models are supported, and support for vision-language models is planned.
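
As an illustrative sketch, and assuming your vLLM version exposes the `model_impl` engine argument for selecting the Transformers backend, the fallback can be requested explicitly (the model name is a placeholder):

```python
from vllm import LLM

# Assumption: model_impl="transformers" asks vLLM to load the model through
# its Transformers backend instead of a native vLLM implementation.
llm = LLM(
    model="my-org/custom-decoder-model",  # illustrative placeholder
    model_impl="transformers",
)
```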

References

We also recommend reading the vLLM documentation for more in-depth information.
