While loading qwen_2.5_vl-q6_k.gguf with CLIPLoader (GGUF) from ComfyUI-GGUF, I get:

    raise ValueError(f"This text model is incompatible with llama.cpp!\nConsider using the safetensors version\n({path})")
    ValueError: This text model is incompatible with llama.cpp!
btw, how many params does this qwen have?
GGUF tensors are simple and transparent; you can read them with a basic parser, much like reading a JSON/txt file, and you don't even need torch. HF also has a built-in GGUF viewer: just click the quant type on the right side of the model page and you'll get the full tensor info for that GGUF file. But I guess you're not really asking about the parameters, since you're not using the right node. gguf and ComfyUI-GGUF are not the same node; they look alike, but the engine and code base are different. ComfyUI-GGUF doesn't support pig-quant GGUF encoders or GGUF VAEs yet, at least at this moment, so you might want to check out the last item in the references for the right node.
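To illustrate the point that GGUF is readable without torch, here is a minimal sketch that parses just the fixed-size GGUF header (magic, version, tensor count, metadata KV count) with the standard library. Field widths follow the public GGUF v3 layout; the function name is my own, and a full tensor listing would additionally require walking the metadata KV section:

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed GGUF header from the first 24 bytes of a file.

    Layout (little-endian, per the GGUF spec):
      4s  magic  b"GGUF"
      I   version (currently 3)
      Q   tensor count
      Q   metadata key/value count
    """
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    tensor_count, kv_count = struct.unpack_from("<QQ", data, 8)
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }
```

On a real file you would call it as `read_gguf_header(open(path, "rb").read(24))`; the tensor count alone already tells you roughly how large the model is before any framework is loaded.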
I switched to the gguf node and it works correctly, thanks.
And judging from the size, is it qwen2.5-vl-4b? Does qwen2.5-vl-7b also work?