This is the GGUF-converted version of Google's Gemma 3 27B IT model (https://huggingface.co/google/gemma-3-27b-it).

Base Model: google/gemma-3-27b-it

Format: GGUF

Quantization: 8-bit (for efficient inference with minimal accuracy loss)

Intended Use: chatbots, text generation, creative writing, question answering, and similar tasks in 140 languages, including Hungarian.
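Since the model has 27B parameters at 8-bit quantization, its weight file size can be roughly estimated at about one byte per parameter; actual memory use at inference time will be higher because of the KV cache and runtime buffers. A minimal back-of-envelope sketch (the figures are illustrative estimates, not measurements of this particular GGUF file):

```python
# Rough size estimate for a 27B-parameter model at 8-bit quantization.
# One byte per weight; runtime overhead (KV cache, buffers) not included.
params = 27e9        # 27 billion parameters
bytes_per_param = 1  # 8-bit quantization ~= 1 byte per weight
weights_gb = params * bytes_per_param / 1e9
print(f"Approximate weight size: {weights_gb:.0f} GB")  # -> 27 GB
```

This is why 8-bit GGUF is a common middle ground: roughly half the footprint of 16-bit weights while staying close to the original model's accuracy.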

Model size: 27B params

Architecture: gemma3

Repository: Tamas05/gemma-3-27b-it-8bit-gguf