
I don't think vLLM can run inference on those binaries; GGUF is the ggml/llama.cpp format.
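
To make the distinction concrete: GGUF files are loaded with llama.cpp or its bindings rather than with the vLLM library. A minimal sketch using the llama-cpp-python bindings, assuming a hypothetical local file `ggml-model-q4_k.gguf` (the actual file name and quantization will differ):

```python
# Minimal sketch: running a GGUF model through llama-cpp-python
# (pip install llama-cpp-python). The model path below is hypothetical.
from llama_cpp import Llama

# Load the GGUF binary; n_ctx sets the context window size.
llm = Llama(model_path="ggml-model-q4_k.gguf", n_ctx=2048)

# Run a simple completion to confirm the model loads and generates.
out = llm("Briefly describe what this model does.", max_tokens=64)
print(out["choices"][0]["text"])
```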

This is for vision LLMs, not the vLLM library; we'll change the wording to make that clearer.

cmp-nct changed pull request status to merged