Excited to see this model available in vLLM!

Opened by xupeng1023

The model's performance looks pretty awesome given its size. If it's supported in vLLM, it will be great for people to try it out and really use it.

This model is already supported in vLLM. For example, you can run:

python3 -m vllm.entrypoints.openai.api_server \
    --model ServiceNow-AI/Apriel-Nemotron-15b-Thinker \
    --dtype auto \
    --tensor-parallel-size 1 \
    --served-model-name apriel_15b \
    --max-logprobs 10 \
    --disable-log-requests \
    --gpu-memory-utilization 0.95
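
Once the server is up, you can query it with any OpenAI-compatible client. Below is a minimal sketch using the openai Python package, assuming the server is running locally on vLLM's default port 8000; the prompt is just an illustration.

# Minimal sketch: query the vLLM OpenAI-compatible server started above.
# Assumes the default local endpoint http://localhost:8000/v1.
from openai import OpenAI

# The server does not require a real API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="apriel_15b",  # must match the --served-model-name flag above
    messages=[
        {"role": "user", "content": "Explain tensor parallelism in one sentence."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)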
