mlx-community/QVQ-72B-Preview-4bit

This model was converted to MLX format from Qwen/QVQ-72B-Preview using mlx-vlm version 0.1.6. Refer to the original model card for more details on the model.
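For reference, conversions like this one are produced with mlx-vlm's convert entry point. A minimal sketch, assuming the convert CLI takes --hf-path/--mlx-path and -q for 4-bit quantization (flag names may vary between mlx-vlm releases, so check the version you have installed):

python -m mlx_vlm.convert --hf-path Qwen/QVQ-72B-Preview --mlx-path QVQ-72B-Preview-4bit -q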

Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/QVQ-72B-Preview-4bit --max-tokens 100 --temp 0.0
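
The model can also be driven from Python. A minimal sketch based on the mlx-vlm README: load, generate, apply_chat_template, and load_config are mlx-vlm helpers, while the image path and prompt below are placeholders; the generate argument order has shifted between mlx-vlm releases, so verify against the version you have installed.

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/QVQ-72B-Preview-4bit"

# Load the 4-bit model and its processor, plus the config used for templating.
model, processor = load(model_path)
config = load_config(model_path)

# Placeholder inputs: point these at a real image and question.
image = ["path/to/image.jpg"]
prompt = "Describe this image."

# Wrap the prompt in the model's chat template, declaring one image slot.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))

# Run generation; verbose=True would also print timing and token statistics.
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)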
Safetensors: 11.5B params, tensor types FP16 and U32. (The 4-bit weights are packed into U32 tensors, so the reported parameter count is far below the original 72B.)

Model tree for mlx-community/QVQ-72B-Preview-4bit

Base model: Qwen/Qwen2-VL-72B, finetuned as Qwen/QVQ-72B-Preview (one of 8 finetunes of the base), from which this 4-bit model was converted.