We just released native support for @SGLang and @vllm-project in Inference Endpoints 🔥
Inference Endpoints is becoming the central place to deploy high-performance inference engines, with the managed infrastructure to run them. Instead of spending weeks configuring infrastructure, managing servers, and debugging deployment issues, you can focus on what matters most: your AI model and your users 🙌
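For context, here's a minimal sketch of what spinning up a vLLM-backed endpoint might look like with the huggingface_hub client; the model, instance, and container image values are illustrative assumptions, not the exact configuration of this release.

```python
# A minimal sketch, assuming the huggingface_hub client. The model,
# instance, and vLLM image values below are illustrative assumptions.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "llama-vllm-demo",  # hypothetical endpoint name
    repository="meta-llama/Llama-3.1-8B-Instruct",  # example model
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    instance_size="x1",           # size/type values vary by vendor
    instance_type="nvidia-a10g",
    custom_image={                # run a vLLM server container (assumed config)
        "url": "vllm/vllm-openai:latest",
        "health_route": "/health",
        "env": {"MODEL_ID": "/repository"},  # assumed env wiring
    },
)

endpoint.wait()      # block until the endpoint is running
print(endpoint.url)  # base URL for OpenAI-compatible requests
```

Once the endpoint is up, you can point any OpenAI-compatible client at that URL and start sending requests.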