Text Generation Inference (TGI)

TGI is a production-grade inference engine built in Rust and Python, designed for high-performance serving of open-source LLMs (e.g. LLaMA, Falcon, StarCoder, BLOOM and many more). Core features such as continuous batching, token streaming, and tensor parallelism make TGI a good choice for production deployments.

By default, the TGI version will be the latest available one (with some delay), but you can also pin a specific version by changing the container URL.
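For example, when creating an endpoint programmatically with `huggingface_hub`, the image can be pinned through the `custom_image` field. This is a minimal sketch, not an authoritative reference: the image tag, model, and hardware values below are illustrative assumptions you should adapt to your own setup.

```python
from huggingface_hub import create_inference_endpoint

# Sketch: pin a specific TGI release by overriding the container URL.
# The endpoint name, image tag and instance settings are example values.
endpoint = create_inference_endpoint(
    "llama-3-3-70b-instruct",                        # endpoint name (example)
    repository="meta-llama/Llama-3.3-70B-Instruct",  # model to deploy
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x4",                              # adjust to your hardware
    instance_type="nvidia-a100",
    custom_image={
        "health_route": "/health",
        # Pinned TGI version instead of the default (latest) image:
        "url": "ghcr.io/huggingface/text-generation-inference:3.0.1",
        "env": {"MODEL_ID": "/repository"},          # serve the attached model
    },
)
endpoint.wait()  # block until the endpoint is running
```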

Configuration

When selecting a model to deploy, the Inference Endpoints UI automatically checks whether a model is supported by TGI. If it is, you’ll see a Container Configuration section where you can change settings such as the max input length, max number of tokens, max batch prefill tokens, and max batch total tokens.


In general, zero-configuration (see below) is recommended for most cases. TGI supports several other configuration parameters; you’ll find a complete list in the TGI documentation. These can all be set by passing the values as environment variables to the container.
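As a hedged sketch, the environment variables go into the same `custom_image` field shown earlier (or into the environment-variable settings of the UI). The variable names mirror TGI launcher flags (upper-cased, dashes replaced by underscores); the exact set of supported flags depends on the TGI version you deploy, so treat the entries below as illustrative assumptions.

```python
# Example environment variables for the TGI container; values are illustrative.
custom_image = {
    "health_route": "/health",
    "url": "ghcr.io/huggingface/text-generation-inference:latest",
    "env": {
        "MODEL_ID": "/repository",    # serve the model attached to the endpoint
        "QUANTIZE": "bitsandbytes",   # corresponds to --quantize bitsandbytes
        "MAX_INPUT_TOKENS": "8000",   # corresponds to --max-input-tokens 8000
        "MAX_TOTAL_TOKENS": "8192",   # corresponds to --max-total-tokens 8192
    },
}
# Pass this dict as `custom_image=...` to `create_inference_endpoint`,
# exactly as in the earlier example.
```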

Zero configuration

Introduced in TGI v3, zero-config mode helps you get the most out of your hardware without manual configuration and trial & error. If you leave the values undefined, TGI will, at server startup and based on the hardware it’s running on, automatically select the largest possible values for the max input length, max number of tokens, max batch prefill tokens and max batch total tokens. This means you’ll use your hardware to its full capacity.

Note that there's a caveat: say you're deploying `meta-llama/Llama-3.3-70B-Instruct`, which has a context length of 128k tokens, but you're on a GPU that can only fit the model's context in memory three times over. If you want to serve the model with the full context length, you can only handle up to 3 concurrent requests. In many cases it's fine to drop the maximum context length to 64k tokens, which lets the server process 6 concurrent requests instead. You can configure this by setting the max input length to 64k and letting TGI auto-configure the rest, as sketched below.
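A small sketch of that trade-off, reusing the environment-variable route from the previous examples; the 64k figure matches the example above, and the variable name is an assumption tied to recent TGI versions.

```python
# Cap the prompt length at ~64k tokens and let zero-config size the rest.
env = {
    "MODEL_ID": "/repository",
    "MAX_INPUT_TOKENS": "64000",  # roughly half of the 128k context
    # MAX_TOTAL_TOKENS, MAX_BATCH_PREFILL_TOKENS and MAX_BATCH_TOTAL_TOKENS are
    # intentionally left unset so TGI picks the largest values that fit in memory.
}
```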

Supported models

You can find the models that are supported by TGI in the supported models list of the TGI documentation.

If a model is supported by TGI, the Inference Endpoints UI indicates this by enabling or disabling TGI as a choice under the Container Type configuration.

References

We also recommend reading the TGI documentation for more in-depth information.
