---
base_model: google/gemma-3n-E4B-it
language:
- en
pipeline_tag: image-text-to-text
library_name: transformers
license: gemma
tags:
- gemma3
- unsloth
- transformers
- gemma
- google
---
Learn how to run & fine-tune Gemma 3n correctly - read our [step-by-step guide](https://docs.unsloth.ai/basics/gemma-3n).
See our collection for all versions of Gemma 3n, including GGUF, 4-bit & 16-bit formats.
Unsloth Dynamic 2.0 quants achieve SOTA accuracy & performance versus other quantization methods.
# ✨ Gemma 3n Usage Guidelines
- Currently **only text** is supported.
- Ollama: `ollama run hf.co/unsloth/gemma-3n-E4B-it-GGUF:Q4_K_XL` - automatically sets the correct chat template and sampling settings
- Set temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0
- Gemma 3n max tokens (context length): 32K. Gemma 3n chat template:
```
<start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\nHey there!<end_of_turn>\n<start_of_turn>user\nWhat is 1+1?<end_of_turn>\n<start_of_turn>model\n
```
- For complete detailed instructions, see our [step-by-step guide](https://docs.unsloth.ai/basics/gemma-3n).
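To illustrate, the template above can be assembled in plain Python. This is only a minimal sketch of the turn structure (the `format_gemma_chat` helper is hypothetical); in practice, `tokenizer.apply_chat_template` from `transformers` renders the template for you:

```python
# Minimal sketch of Gemma's turn-based chat template (illustrative helper,
# not part of transformers). In real code, prefer tokenizer.apply_chat_template.

def format_gemma_chat(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts into Gemma's template."""
    out = ""
    for m in messages:
        # Gemma uses "model" (not "assistant") as the responder role.
        role = "model" if m["role"] == "assistant" else m["role"]
        out += f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n"
    if add_generation_prompt:
        # Trailing open "model" turn prompts the model to generate its reply.
        out += "<start_of_turn>model\n"
    return out

prompt = format_gemma_chat([
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hey there!"},
    {"role": "user", "content": "What is 1+1?"},
])
print(prompt)
```

Note that the prompt ends with an open `<start_of_turn>model\n` turn, which is what signals the model to generate the next reply.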
# 🦥 Fine-tune Gemma 3n with Unsloth
- Fine-tune Gemma 3n (4B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
- Read our Blog about Gemma 3n support: [unsloth.ai/blog/gemma-3n](https://unsloth.ai/blog/gemma-3n)
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma-3n-E4B** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 80% less |
| **GRPO with Gemma 3 (1B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(1B)-GRPO.ipynb) | 2x faster | 80% less |
| **Gemma 3 (4B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B).ipynb) | 2x faster | 60% less |
| **Qwen3 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb) | 2x faster | 60% less |
| **DeepSeek-R1-0528-Qwen3-8B** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/DeepSeek_R1_0528_Qwen3_(8B)_GRPO.ipynb) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
# Gemma-3n-E4B model card