Qwen2.5-VL-7B-Instruct
Converted and quantized with HimariO's fork of llama.cpp, following this procedure. No importance matrix (IMatrix) was used.
The fork is currently required to run inference, and there is no guarantee these checkpoints will work with future builds. Temporary builds are available here. The latest tested build as of writing is qwen25-vl-b4899-bc4163b.
Usage
./llama-qwen2vl-cli -m Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf --mmproj qwen2.5-vl-7b-instruct-vision-f16.gguf -p "Please describe this image." --image ./image.jpg
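To caption several images, the CLI invocation above can be wrapped in a short script. A minimal sketch in Python, assuming the fork's `llama-qwen2vl-cli` binary and both GGUF files sit in the working directory (paths and the `describe_images` helper are illustrative, not part of the release):

```python
import subprocess
from pathlib import Path

# File names from the usage example above; adjust to your local paths.
MODEL = "Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf"
MMPROJ = "qwen2.5-vl-7b-instruct-vision-f16.gguf"

def build_cmd(image_path: str, prompt: str = "Please describe this image.") -> list[str]:
    """Construct the llama-qwen2vl-cli argument list for one image."""
    return [
        "./llama-qwen2vl-cli",
        "-m", MODEL,
        "--mmproj", MMPROJ,
        "-p", prompt,
        "--image", image_path,
    ]

def describe_images(image_dir: str) -> None:
    """Run the CLI once per .jpg in image_dir (requires the fork's build)."""
    for img in sorted(Path(image_dir).glob("*.jpg")):
        subprocess.run(build_cmd(str(img)), check=True)
```

Each call loads the model from scratch, so for large batches a server-style setup (if the fork provides one) would be faster; this sketch only mirrors the documented single-image command.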
Model tree for samgreen/Qwen2.5-VL-7B-Instruct-GGUF
Base model: Qwen/Qwen2.5-VL-7B-Instruct