Downloads last month: 114
Format: GGUF
Model size: 27B params
Architecture: gemma3
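
A rough sense of the memory needed for the weights follows directly from the 27B parameter count. The sketch below is a back-of-the-envelope estimate, assuming the standard q4_0 layout of roughly 4.5 bits per weight (blocks of 32 4-bit values plus an fp16 scale per block); actual GGUF file sizes will differ somewhat, and the estimate ignores KV cache and activations.

```python
# Ballpark weight-memory estimate for a 27B-parameter model.
# q4_0 packs 32 weights per block with one fp16 scale, i.e. ~4.5 bits/weight;
# 16-bit is the unquantized baseline. These are estimates, not exact file sizes.
PARAMS = 27e9

def weight_bytes(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8

for name, bits in [("16-bit (bf16/fp16)", 16.0), ("q4_0 (~4.5 bits/weight)", 4.5)]:
    print(f"{name}: ~{weight_bytes(bits) / 1e9:.1f} GB")
# 16-bit (bf16/fp16): ~54.0 GB
# q4_0 (~4.5 bits/weight): ~15.2 GB
```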

Model tree for NexesQuants/google_gemma-3-27b-it-qat-q4_0-unquantized-iMat-NXS-GGUF

Quantized (10): this model
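
Since the README itself is empty, a minimal loading sketch may be useful. It assumes the llama-cpp-python bindings and the huggingface_hub downloader; the GGUF filename is a placeholder (check the repository's file list for the actual *.gguf name), and this is an illustration rather than the uploader's documented usage.

```python
# Minimal sketch: fetch one GGUF file from this repo and run a chat prompt
# with llama-cpp-python. The filename is hypothetical; pick the real one
# from the repo's file listing.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "NexesQuants/google_gemma-3-27b-it-qat-q4_0-unquantized-iMat-NXS-GGUF"
gguf_file = "google_gemma-3-27b-it-qat-q4_0.gguf"  # placeholder filename

model_path = hf_hub_download(repo_id=repo_id, filename=gguf_file)

# Tuning knobs: a 27B model at ~4 bits still needs on the order of 16 GB for
# the weights alone, so offload as many layers to the GPU as will fit.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what the GGUF format is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```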