# Qwen2.5-VL-7B-Instruct

Converted and quantized with HimariO's llama.cpp fork, following this procedure. No importance matrix (IMatrix) was used.

The fork is currently required to run inference, and there is no guarantee these checkpoints will work with future builds. Temporary builds are available here; the latest tested build as of writing is qwen25-vl-b4899-bc4163b.

## Original model

## Usage

```shell
./llama-qwen2vl-cli -m Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf --mmproj qwen2.5-vl-7b-instruct-vision-f16.gguf -p "Please describe this image." --image ./image.jpg
```
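For scripting many images, the CLI invocation above can be wrapped in Python. This is a minimal sketch, not part of the upstream tooling: the `build_cmd` and `describe` helpers are hypothetical, and only the binary name, model/mmproj filenames, and flags are taken from the usage line above.

```python
import shlex
import subprocess

# Filenames from the usage example above; adjust to your local paths.
MODEL = "Qwen2.5-VL-7B-Instruct-Q4_K_M.gguf"
MMPROJ = "qwen2.5-vl-7b-instruct-vision-f16.gguf"


def build_cmd(image_path, prompt="Please describe this image.",
              binary="./llama-qwen2vl-cli"):
    """Assemble the llama-qwen2vl-cli argument list shown in Usage."""
    return [binary, "-m", MODEL, "--mmproj", MMPROJ,
            "-p", prompt, "--image", image_path]


def describe(image_path, **kwargs):
    """Run the CLI on one image and return its stdout."""
    result = subprocess.run(build_cmd(image_path, **kwargs),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()


if __name__ == "__main__":
    # Print the exact shell command without executing it.
    print(shlex.join(build_cmd("./image.jpg")))
```

The sketch shells out rather than loading the model in-process, since the fork currently exposes inference only through the CLI.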
## Model details

- Format: GGUF
- Model size: 7.62B params
- Architecture: qwen2vl
- Available quantizations: 4-bit, 5-bit, 8-bit, 16-bit, 32-bit
Model repo: samgreen/Qwen2.5-VL-7B-Instruct-GGUF