---
base_model: google/gemma-3n-E4B-it
language:
- en
pipeline_tag: image-text-to-text
library_name: transformers
license: gemma
tags:
- gemma3
- unsloth
- transformers
- gemma
- google
---
Learn how to run & fine-tune Gemma 3n correctly - Read our Guide.
See our collection for all versions of Gemma 3n including GGUF, 4-bit & 16-bit formats.
Unsloth Dynamic 2.0 achieves SOTA accuracy & performance versus other quants.
✨ Gemma 3n Usage Guidelines
- Currently only text is supported.
- Ollama: `ollama run hf.co/unsloth/gemma-3n-E4B-it-GGUF:Q4_K_XL` automatically sets the correct chat template and settings.
- Recommended sampling settings: temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0.
- Gemma 3n max tokens (context length): 32K.
- Gemma 3n chat template:

  ```
  <bos><start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\nHey there!<end_of_turn>\n<start_of_turn>user\nWhat is 1+1?<end_of_turn>\n<start_of_turn>model\n
  ```

- For complete detailed instructions, see our step-by-step guide.
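The chat template above can be rendered programmatically. Below is a minimal sketch in plain Python (the helper function name is hypothetical); when loading the model with transformers, `tokenizer.apply_chat_template` produces this format for you:

```python
def format_gemma3n_prompt(messages):
    """Render a list of {"role", "content"} dicts into the Gemma 3n
    chat format, ending with an open model turn for generation."""
    prompt = "<bos>"
    for msg in messages:
        # Gemma uses "model" rather than "assistant" for its own turns.
        role = "model" if msg["role"] == "assistant" else msg["role"]
        prompt += f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # cue the model to respond
    return prompt

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hey there!"},
    {"role": "user", "content": "What is 1+1?"},
]
print(format_gemma3n_prompt(messages))
```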
🦥 Fine-tune Gemma 3n with Unsloth
- Fine-tune Gemma 3n (4B) for free using our Google Colab notebook here!
- Read our Blog about Gemma 3n support: unsloth.ai/blog/gemma-3n
- View the rest of our notebooks in our docs here.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Gemma-3n-E4B | ▶️ Start on Colab | 2x faster | 80% less |
| GRPO with Gemma 3 (1B) | ▶️ Start on Colab | 2x faster | 80% less |
| Gemma 3 (4B) | ▶️ Start on Colab | 2x faster | 60% less |
| Qwen3 (14B) | ▶️ Start on Colab | 2x faster | 60% less |
| DeepSeek-R1-0528-Qwen3-8B | ▶️ Start on Colab | 2x faster | 80% less |
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |