Gemma-3-Gaia-PT-BR-4b-it - GGUF Quantized by brunoconterato
This repository contains GGUF versions of the Gemma-3-Gaia-PT-BR-4b-it model, packaged for easy use with Ollama.
Model Details:
- Base Model: Gemma-3-Gaia-PT-BR-4b-it
- Quantization: F16 (16-bit GGUF)
- Original Source: CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it on Hugging Face
- Developer: brunoconterato
License Information
This model is provided under and subject to the Gemma Terms of Use. By downloading or using this model, you agree to be bound by these terms.
Key obligations include:
- Compliance with the Gemma Prohibited Use Policy.
- Providing a copy of the Gemma Terms of Use to any third-party recipients.
- Prominent notice that this is a modified (quantized) version.
Please ensure you read and understand the full terms. A copy of the Gemma Terms of Use is also included in this repository as Gemma_Terms_of_Use.txt.
How to use with Ollama
To run this model with Ollama, first ensure you have Ollama installed and running. Then you can pull and run the model directly from Hugging Face:
ollama run hf.co/brunoconterato/Gemma-3-Gaia-PT-BR-4b-it-GGUF-F16
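Once pulled, the model can also be called programmatically. Below is a minimal sketch using the official ollama Python client (installed with `pip install ollama`); it assumes Ollama is serving locally on its default port, and the example prompt is purely illustrative.

```python
# Minimal sketch: querying the model through Ollama's Python client.
# Assumes Ollama is running locally and the model was pulled as shown above.
import ollama

MODEL = "hf.co/brunoconterato/Gemma-3-Gaia-PT-BR-4b-it-GGUF-F16"

response = ollama.chat(
    model=MODEL,
    messages=[
        # Example prompt in Portuguese, the model's target language.
        {"role": "user", "content": "Explique em poucas palavras o que é quantização de modelos."},
    ],
)
print(response["message"]["content"])
```

For longer generations, passing stream=True to ollama.chat yields the response incrementally instead of as a single message.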
---
Model tree for brunoconterato/Gemma-3-Gaia-PT-BR-4b-it-GGUF-F16
- Base model: google/gemma-3-4b-pt
- Fine-tuned: CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
- Quantized (this repository): brunoconterato/Gemma-3-Gaia-PT-BR-4b-it-GGUF-F16