Issue with vLLM serve #8
by SagefSixPaths - opened
The vLLM command provided doesn't seem to be working:
vllm serve sarvamai/sarvam-translate --port 8000 --dtype bfloat16
It gives the following error:
ValueError: There is no module or parameter named 'model' in Gemma3ForConditionalGeneration
The same error persists across different versions of the vLLM OpenAI-compatible Docker image (v0.8.4, v0.8.5.post1, v0.9.0, v0.9.0.1). Any fixes or workarounds for this issue would be of great help!
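For context, once the server comes up I'd expect to query it through the OpenAI-compatible route roughly like this (a sketch assuming the default /v1 path on port 8000, no API key, and a placeholder prompt; the exact prompt format for the model may differ):

```python
# Sketch: query the vLLM OpenAI-compatible server once it is running.
# Assumes the default /v1 route on port 8000 and that no API key is required.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="sarvamai/sarvam-translate",
    messages=[
        # Placeholder messages; follow the model card for the real prompt format.
        {"role": "system", "content": "Translate the text below to Hindi."},
        {"role": "user", "content": "Hello, how are you?"},
    ],
)
print(response.choices[0].message.content)
```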
Thanks for reporting the issue. We have fixed the checkpoint now.
Please try again and let us know if you're still facing issues.
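If an older copy of the checkpoint is still in your local Hugging Face cache, forcing a fresh download before retrying might help; a minimal sketch using huggingface_hub (assuming the default cache location):

```python
# Force a fresh download of the fixed checkpoint so vLLM doesn't pick up
# a stale cached copy (assumes the default Hugging Face cache location).
from huggingface_hub import snapshot_download

snapshot_download("sarvamai/sarvam-translate", force_download=True)
```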
GokulNC changed discussion status to closed