# mlx-community/gemma-3-4b-it-qat-4bit

This model was converted to MLX format from `/Volumes/T7/Models/hf-models/gemma-3-4b-it-qat-q4_0-unquantized` using mlx-vlm version **0.1.25**. Refer to the original model card for more details.

## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/gemma-3-4b-it-qat-4bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
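
The model can also be used from Python. Below is a minimal sketch following the load/generate pattern documented in the mlx-vlm README; exact function signatures may vary between mlx-vlm versions, and the image path is a placeholder.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the quantized model and its processor from the Hub
model_path = "mlx-community/gemma-3-4b-it-qat-4bit"
model, processor = load(model_path)
config = load_config(model_path)

# Prepare the input: a list of image paths/URLs and a text prompt
image = ["path/to/image.jpg"]  # placeholder path
prompt = "Describe this image."

# Wrap the prompt in the model's chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(image)
)

# Run generation and print the decoded output
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```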