Gemma-3-Gaia-PT-BR-4b-it - GGUF Quantized by brunoconterato

This repository contains GGUF versions of the Gemma-3-Gaia-PT-BR-4b-it model, packaged for use with Ollama.

Model Details:

  • Base Model: Gemma-3-Gaia-PT-BR-4b-it
  • Quantization: F16 (16-bit GGUF)
  • Model size: 3.88B parameters
  • Architecture: gemma3
  • Original Source: [Link to the original model on Hugging Face, if available]
  • Developer: brunoconterato
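
If you want the raw GGUF file itself (for example, to use with llama.cpp or another GGUF runtime) rather than pulling through Ollama, the Hugging Face CLI can fetch it. A minimal sketch; the --local-dir path is just an example:

huggingface-cli download brunoconterato/Gemma-3-Gaia-PT-BR-4b-it-GGUF-F16 --local-dir ./gguf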

License Information

This model is provided under and subject to the Gemma Terms of Use. By downloading or using this model, you agree to be bound by these terms.

Key obligations include:

  • Complying with the Gemma Prohibited Use Policy when using the model or its outputs.
  • Passing the Gemma Terms of Use, including its use restrictions, on to anyone you redistribute the model or its derivatives to.

Please ensure you read and understand the full terms: https://ai.google.dev/gemma/terms

A copy of the "Gemma Terms of Use" is also included in this repository as Gemma_Terms_of_Use.txt.

How to use with Ollama

To run this model with Ollama, first ensure you have Ollama installed and running. Then pull and run the model directly from Hugging Face:

ollama run hf.co/brunoconterato/Gemma-3-Gaia-PT-BR-4b-it-GGUF-F16
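
Once pulled, the model is also reachable through Ollama's local REST API (port 11434 by default). A minimal sketch, assuming the model keeps the full hf.co/... name it was pulled under:

curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/brunoconterato/Gemma-3-Gaia-PT-BR-4b-it-GGUF-F16",
  "prompt": "Resuma em uma frase: o que é um modelo de linguagem?",
  "stream": false
}'

Setting "stream": false returns one complete JSON response instead of a token-by-token stream.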

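Alternatively, if you downloaded the GGUF file locally (see the download command above), you can register it with Ollama via a Modelfile. The filename and the local model name gaia-pt-br below are illustrative assumptions:

# Modelfile
FROM ./Gemma-3-Gaia-PT-BR-4b-it-F16.gguf

ollama create gaia-pt-br -f Modelfile
ollama run gaia-pt-br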