MLX-quantized version of https://huggingface.co/Almawave/Velvet-14B, created with mlx_lm.convert. It occupies 7.93GB and runs very well on Apple Silicon Macs with 16GB of RAM.
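
As a rough sketch (not necessarily the exact commands used for this repo), the conversion and a quick local test can be done with the `mlx_lm` Python API. The quantization settings below are assumptions, since the card does not state them; 4-bit is a plausible guess given the 7.93GB footprint:

```python
# Minimal sketch using the standard mlx_lm Python API.
# Quantization settings are assumed (4-bit, the mlx_lm default),
# not confirmed by this model card.
from mlx_lm import convert, load, generate

# Quantize the original Hugging Face model to MLX format.
convert(
    "Almawave/Velvet-14B",
    mlx_path="almawave-Velvet-14B-MLX",
    quantize=True,  # mlx_lm defaults: 4-bit, group size 64
)

# Load the quantized model from the Hub and generate text.
model, tokenizer = load("fabiolecca/almawave-Velvet-14B-MLX")
response = generate(
    model,
    tokenizer,
    prompt="Scrivi una breve descrizione di Roma.",  # example prompt
    max_tokens=200,
    verbose=True,  # streams tokens and prints generation stats
)
```

The same test can be run from the command line with `python -m mlx_lm.generate --model fabiolecca/almawave-Velvet-14B-MLX --prompt "..."`.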
