SGLang
Configuration
- Max Running Requests: the maximum number of requests that can run concurrently; requests beyond this limit are queued.
- Max Prefill Tokens (per batch): the maximum number of tokens that can be processed in a single prefill operation. This controls the batch size for the prefill phase and helps manage memory usage during prompt processing.
- Chunked Prefill Size: sets how many tokens are processed at once during the prefill phase. If a prompt is longer than this value,
it is split into smaller chunks that are processed sequentially, which avoids out-of-memory errors during prefill with long prompts.
For example, setting `--chunked-prefill-size 4096` means each chunk contains up to 4096 tokens. Setting this to -1
disables chunked prefill.
- Tensor Parallel Size: the number of GPUs to use for tensor parallelism. This enables model sharding across multiple GPUs
to handle larger models that don’t fit on a single GPU. For example, setting this to 2 will split the model across 2 GPUs.
- KV Cache DType: the data type used for storing the key-value cache during generation. Options include "auto", "fp8_e5m2",
and "fp8_e4m3". Using lower-precision types can reduce memory usage but may slightly impact generation quality.
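These options correspond to flags of the SGLang server launcher. A minimal launch sketch; the model path and the specific values below are illustrative placeholders, not recommendations:

```shell
# Sketch: launch an SGLang server with the configuration options described above.
# The model path and flag values are examples only; tune them for your hardware.
python -m sglang.launch_server \
  --model-path meta-llama/Llama-3.1-8B-Instruct \
  --max-running-requests 256 \
  --max-prefill-tokens 16384 \
  --chunked-prefill-size 4096 \
  --tp-size 2 \
  --kv-cache-dtype fp8_e5m2
```

Here `--tp-size 2` shards the model across two GPUs, and `--kv-cache-dtype fp8_e5m2` trades a small amount of generation quality for roughly half the KV-cache memory compared with 16-bit storage.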