SGLang

SGLang is a fast serving framework for large language models and vision-language models. It is similar to TGI and vLLM and comes with production-ready features.

Its core features include a fast backend runtime with RadixAttention for prefix caching, continuous batching, tensor parallelism, structured outputs, and quantization support.

Configuration

[Screenshot: selecting SGLang when configuring an Inference Endpoint]
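
Once the endpoint is running, it exposes SGLang's OpenAI-compatible API. A minimal sketch of sending a chat completion request is shown below; the base URL, token, and model name are placeholders to replace with the endpoint URL from the Inference Endpoints UI, a Hugging Face token with access to the endpoint, and the model you deployed.

```python
from openai import OpenAI

# Minimal sketch: query a deployed SGLang endpoint through its
# OpenAI-compatible API. The base URL, API key, and model name below
# are placeholders.
client = OpenAI(
    base_url="https://<your-endpoint>.endpoints.huggingface.cloud/v1",
    api_key="hf_xxx",  # a Hugging Face token with access to the endpoint
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # the model served by the endpoint
    messages=[{"role": "user", "content": "Explain prefix caching in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```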

For more advanced configuration, you can pass any of the Server Arguments that SGLang supports as container arguments. For example, changing the schedule policy to `lpm` would look like this:

[Screenshot: passing --schedule-policy lpm as a container argument in the advanced endpoint configuration]
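
For reference, the same flag can be tried out locally by launching the SGLang server directly. The sketch below is only an illustration of how such a server argument is passed; the model path and port are placeholders to adapt.

```python
import subprocess

# Rough local equivalent: launch the SGLang server with the same flag.
# Model path and port are placeholders; adjust them for your setup.
subprocess.run([
    "python", "-m", "sglang.launch_server",
    "--model-path", "meta-llama/Llama-3.1-8B-Instruct",
    "--schedule-policy", "lpm",  # longest-prefix-match scheduling
    "--host", "0.0.0.0",
    "--port", "30000",
])
```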

Supported models

SGLang has wide support for large language models, multimodal language models, embedding models, and more. We recommend reading the supported models section of the SGLang documentation for a full list.

In the Inference Endpoints UI, any model on the Hugging Face Hub that has a transformers tag can be deployed with SGLang by default. This is because SGLang falls back to a transformers-based implementation when it does not have its own implementation of a model.
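
If you want to check whether a given Hub model carries the transformers tag, a quick sketch with the huggingface_hub client is shown below; the repository id is an arbitrary example.

```python
from huggingface_hub import model_info

# Check whether a Hub model carries the transformers tag, which is what
# enables SGLang's transformers fallback. The repo id is an example.
info = model_info("Qwen/Qwen2.5-0.5B-Instruct")
print(info.library_name)            # "transformers" for most text models
print("transformers" in info.tags)  # True if the tag is present
```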

References

We also recommend reading the SGLang documentation for more in-depth information.
