GGUF quants (with MMPROJ) of UI-TARS-1.5-7B

| File   | Size     |
|--------|----------|
| mmproj | 1.32 GB  |
| Q4_K_M | 4.57 GB  |
| Q6_K   | 6.11 GB  |
| Q8_0   | 7.91 GB  |
| F16    | 14.88 GB |
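To run any of these quants, you need both a quantized model file and the mmproj file (the multimodal projector that lets llama.cpp pass images to the language model). Below is a minimal download sketch using huggingface_hub; the exact `.gguf` filenames are assumptions, so check the repository's file listing before running.

```python
# Minimal sketch: fetch one quant plus the mmproj from this repo.
# The filenames below are assumptions -- verify them against the
# repository's file list.
from huggingface_hub import hf_hub_download

REPO_ID = "adriabama06/UI-TARS-1.5-7B-GGUF"

model_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="UI-TARS-1.5-7B-Q4_K_M.gguf",       # assumed filename
)
mmproj_path = hf_hub_download(
    repo_id=REPO_ID,
    filename="mmproj-UI-TARS-1.5-7B-f16.gguf",   # assumed filename
)
print(model_path, mmproj_path)
```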
Model size: 7.62B params
Architecture: qwen2vl
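Since the architecture is qwen2vl, inference goes through llama.cpp's multimodal tooling, which takes the model and the mmproj as separate arguments. A sketch of the invocation follows; the binary name is an assumption and depends on your llama.cpp version (recent builds ship llama-mtmd-cli, while older ones used llava-cli or a model-specific CLI), so adjust it to match your build.

```python
# Sketch of running an image prompt through llama.cpp's multimodal
# CLI. Binary name and filenames are assumptions -- adjust to your
# llama.cpp build and the repo's actual file names.
import subprocess

MODEL = "UI-TARS-1.5-7B-Q4_K_M.gguf"        # assumed filename
MMPROJ = "mmproj-UI-TARS-1.5-7B-f16.gguf"   # assumed filename

subprocess.run(
    [
        "llama-mtmd-cli",             # assumed binary name; see note above
        "-m", MODEL,                  # quantized language model
        "--mmproj", MMPROJ,           # multimodal projector
        "--image", "screenshot.png",  # hypothetical input image
        "-p", "Describe the UI elements in this screenshot.",
    ],
    check=True,
)
```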
