config.json and other files are missing, causing vLLM to fail to run

#1
by Baicai - opened

I appreciate your contribution, but the files needed to run it (such as config.json) appear to be missing. Could you please check whether any steps were omitted?
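
For reference, you can list which files the repo actually contains with huggingface_hub. This is a minimal sketch, and the repo id below is a placeholder, not the actual repo in question; vLLM's standard Hugging Face loader generally expects config.json along with tokenizer files and the model weights.

```python
from huggingface_hub import list_repo_files

# Placeholder repo id -- substitute the repo under discussion.
# A repo loadable by vLLM's default HF path typically needs at least
# config.json plus tokenizer files and safetensors/bin weights.
files = list_repo_files("your-org/your-gguf-repo")
print(sorted(files))
```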

I didn't know vLLM supported running a GGUF model directly from a model handle. Do you have an example repo that contains the necessary files? I will add them.

I tried running vLLM on this GGUF file locally, but it seems that vLLM doesn't support Gemma 3 GGUF yet.
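
For anyone hitting the same issue, loading a local GGUF file in vLLM generally looks like the sketch below. vLLM's GGUF support is experimental and expects a path to a single .gguf file, with the tokenizer taken from the original Hugging Face repo; the file name and tokenizer repo here are placeholders, not the actual model in question.

```python
from vllm import LLM, SamplingParams

# Placeholder file name and tokenizer repo -- adjust to the actual model.
# vLLM's experimental GGUF loader takes a path to a single .gguf file;
# pointing `tokenizer` at the original HF repo avoids a slow GGUF
# tokenizer conversion.
llm = LLM(
    model="./gemma-3-it.Q4_K_M.gguf",
    tokenizer="google/gemma-3-4b-it",
)

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Hello! Can you introduce yourself?"], params)
print(outputs[0].outputs[0].text)
```

If the model's architecture isn't covered by vLLM's GGUF loader, initialization fails with an error rather than producing output, which matches the behavior described above.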
