NOTICE: vLLM is now available.
#31 opened by DongHyunKim
After a major code update, we finally support vLLM.
https://github.com/NAVER-Cloud-HyperCLOVA-X/vllm/tree/v0.9.2rc2_hyperclovax_vision_seed
Thank you for your contribution, @bigshanedogg.
Make sure to switch to the v0.9.2rc2_hyperclovax_vision_seed branch.
For more details, check out the README in our vllm repository.
Launch API server:
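A minimal sketch, assuming the standard OpenAI-compatible `vllm serve` entrypoint and the Hub model ID `naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B` (both are assumptions here; the README linked above has the exact command and flags):

```bash
# Minimal sketch: model ID, port, and flags are assumptions; see the README for the exact command.
vllm serve naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B \
  --trust-remote-code \
  --port 8000
```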
Request Example:
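A minimal sketch of a chat-completions request against the server above, using the OpenAI Python client. It assumes the server is listening on `localhost:8000` and was launched with the model ID shown; the image URL is a placeholder:

```python
# Minimal sketch: assumes the server above is running on localhost:8000 and that
# the model was served under this ID. The image URL is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```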
Offline Inference Examples (a minimal sketch follows the links below):
- https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/vision_language.py
- https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/vision_language_multi_image.py
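The linked scripts are the authoritative reference; as a minimal sketch of the offline path using vLLM's `LLM` API (the model ID, prompt text, and image path are assumptions):

```python
# Minimal sketch: the model ID is an assumption, and the real prompt may require the
# model's image placeholder token; the linked examples are the authoritative reference.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(
    model="naver-hyperclovax/HyperCLOVAX-SEED-Vision-Instruct-3B",
    trust_remote_code=True,
)

image = Image.open("sample.jpg")  # hypothetical local image file

outputs = llm.generate(
    {
        "prompt": "Describe this image.",  # check the linked examples for the exact template
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=128),
)
print(outputs[0].outputs[0].text)
```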
Thank you for your continued interest in and support of our model.
HyperCLOVA Team.