Running with VLLM is not printing '<think>' tokens.

#8
by TalonOne - opened

I'm trying the Phi-4-reasoning-plus model using vLLM's v0.8.5 docker container with the --enable-reasoning --reasoning-parser deepseek_r1 flags, but it's not printing <think> or </think>. Am I doing something wrong? Ollama runs it just fine, so I'm wondering what I'm missing.
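For reference, as far as I understand, with the reasoning parser enabled vLLM's OpenAI-compatible server is supposed to strip the <think>...</think> tags and return the reasoning as a separate reasoning_content field on the message, so the literal tags wouldn't show up in content anyway. Here's the rough check I've been running (assuming the default port 8000 and the served model name Phi-4-reasoning-plus; adjust both to your setup):

# Ask the server for a completion and look at where the reasoning ends up.
# Assumes the OpenAI-compatible server on localhost:8000 and the model served
# under the name "Phi-4-reasoning-plus"; change both if yours differ.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Phi-4-reasoning-plus",
        "messages": [{"role": "user", "content": "What is 17 * 24?"}]
      }'
# If the parser is active, choices[0].message should contain a
# "reasoning_content" field alongside "content"; if that field is missing,
# the --enable-reasoning / --reasoning-parser flags likely never took effect.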

Excellent model by the way!

Thanks,

Same for me using SGL. The model just starts thinking and never terminates with </think>, so it gets stuck repeating the same few reasoning sentences over and over. I'm using the standard prompt shown in the model card.

I could not get the <think> part working either.

Well, here's where I come back and let folks know the problem was between the keyboard and the chair (PBKAC).

The multi-line docker command I was sending had an inadvertent space after a line-continuation backslash, which causes the shell to end the command there and never pass the remaining options to vLLM.
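A minimal sketch of the pitfall (nothing vLLM-specific, just how shell line continuation behaves):

# Note the space after the backslash on the first line: the backslash now
# escapes the space instead of the newline, so the command ends right there
# and the next line is run as a separate command.
echo one \ 
     two
# This prints "one " and then fails with "two: command not found".
# The same thing silently drops the trailing docker run options, so
# --enable-reasoning and --reasoning-parser never reach vLLM.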

If you spawn your container like this, it'll work:

docker run --runtime nvidia --gpus '"device=0,1"' \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=hf_redactedY" \
    --net=host \
    --ipc=host \
    vllm/vllm-openai:v0.8.5 \
    --model microsoft/Phi-4-reasoning-plus \
    --served-model-name Phi-4-reasoning-plus \
    --generation-config vllm \
    --enable-reasoning \
    --reasoning-parser deepseek_r1 
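(If you want to double-check that the options actually made it past the continuation, you can hit the models endpoint; this assumes the default port 8000.)

# The served model name only shows up if the arguments after the image name
# were parsed, so this is a quick sanity check.
curl -s http://localhost:8000/v1/models
# Expect an entry with "id": "Phi-4-reasoning-plus" (the --served-model-name
# value); if it shows the raw "microsoft/Phi-4-reasoning-plus" instead, some
# of the options were dropped.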

However, that said, I have seen vLLM just stop printing tokens mid-answer. Ollama is fine, though. I'll also say that running through vLLM seems to give less precise answers than running via Ollama: same prompts, same temperature, same top-k/top-p, but less precise answers. It's almost as if the context cache is not as effective with vLLM as with Ollama. I'm not sure why, but I've switched over to Ollama because of these two issues with vLLM.

I don't think this is a vLLM issue; on KoboldCpp I haven't gotten the model to reason correctly either, despite having no issues with any other reasoning models.
Exact same behavior: <think> is not sent, or if it is forced, the </think> is never sent. The result is either no reasoning at all or infinite reasoning.

Same problem with llama.cpp directly
