shimmyshimmer committed
Commit 753cd29 (verified) · Parent: b86d9a3

Update README.md

Files changed (1)
  1. README.md +64 -3
README.md CHANGED
@@ -1,3 +1,64 @@
- ---
- license: gemma
- ---
+ ---
+ base_model: google/gemma-3n-E4B-it
+ language:
+ - en
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ license: gemma
+ tags:
+ - gemma3
+ - unsloth
+ - transformers
+ - gemma
+ - google
+ ---
+ <div>
+ <p style="margin-bottom: 0; margin-top: 0;">
+ <strong>Learn how to run & fine-tune Gemma 3n correctly - <a href="https://docs.unsloth.ai/basics/gemma-3n">Read our Guide</a>.</strong>
+ </p>
+ <p style="margin-bottom: 0;">
+ <em>See <a href="https://huggingface.co/collections/unsloth/gemma-3n-685d3874830e49e1c93f9339">our collection</a> for all versions of Gemma 3n including GGUF, 4-bit & 16-bit formats.</em>
+ </p>
+ <p style="margin-top: 0;margin-bottom: 0;">
+ <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves SOTA accuracy & performance versus other quants.</em>
+ </p>
+ <div style="display: flex; gap: 5px; align-items: center; ">
+ <a href="https://github.com/unslothai/unsloth/">
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
+ </a>
+ <a href="https://discord.gg/unsloth">
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
+ </a>
+ <a href="https://docs.unsloth.ai/basics/gemma-3n">
+ <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
+ </a>
+ </div>
+ <h1 style="margin-top:0; margin-bottom: 0;">✨ Gemma 3n Usage Guidelines</h1>
+ </div>
+
+ - Currently **only text** input is supported.
+ - Ollama: `ollama run hf.co/unsloth/gemma-3n-E4B-it:Q4_K_XL` automatically sets the correct chat template and sampling settings.
+ - Set temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0 (see the Python sketch after this list).
+ - Gemma 3n max context length: 32K tokens. Gemma 3n chat template:
+ ```
+ <bos><start_of_turn>user\nHello!<end_of_turn>\n<start_of_turn>model\nHey there!<end_of_turn>\n<start_of_turn>user\nWhat is 1+1?<end_of_turn>\n<start_of_turn>model\n
+ ```
+ - For complete, detailed instructions, see our [step-by-step guide](https://docs.unsloth.ai/basics/gemma-3n).
+
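Below is a minimal Python sketch of the settings above, built with the Hugging Face `transformers` chat-template API. The repo id `unsloth/gemma-3n-E4B-it` and the 256-token output budget are assumptions for illustration; the step-by-step guide linked above remains the canonical reference.

```python
# Sketch: build the Gemma 3n chat format and collect the recommended sampling
# settings with Hugging Face transformers (assumes a version with Gemma 3n support).
from transformers import AutoTokenizer

# Assumption: the 16-bit repo id for this card; swap in whichever checkpoint you use.
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3n-E4B-it")

messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hey there!"},
    {"role": "user", "content": "What is 1+1?"},
]

# Produces the <start_of_turn>/<end_of_turn> string shown above,
# ending with an open model turn ready for generation.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)

# Recommended sampling settings from the guidelines, usable as generate() kwargs.
generation_kwargs = dict(
    do_sample=True,
    temperature=1.0,
    top_k=64,
    top_p=0.95,
    min_p=0.0,
    max_new_tokens=256,  # keep prompt + output inside the 32K context window
)
```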
+
+ # 🦥 Fine-tune Gemma 3n with Unsloth
+
+ - Fine-tune Gemma 3n (4B) for free using our [Google Colab notebook](https://docs.unsloth.ai/get-started/unsloth-notebooks)! A minimal code sketch follows the table below.
+ - Read our blog post about Gemma 3n support: [unsloth.ai/blog/gemma-3n](https://unsloth.ai/blog/gemma-3n)
+ - View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
+
+ | Unsloth supports | Free Notebooks | Performance | Memory use |
+ |------------------|----------------|-------------|------------|
+ | **Gemma-3n-E4B** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 2x faster | 80% less |
+ | **GRPO with Gemma 3 (1B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(1B)-GRPO.ipynb) | 2x faster | 80% less |
+ | **Gemma 3 (4B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(4B).ipynb) | 2x faster | 60% less |
+ | **Qwen3 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(14B)-Reasoning-Conversational.ipynb) | 2x faster | 60% less |
+ | **DeepSeek-R1-0528-Qwen3-8B** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/DeepSeek_R1_0528_Qwen3_(8B)_GRPO.ipynb) | 2x faster | 80% less |
+ | **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
+ <br>
+
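For orientation only, here is a minimal text-only LoRA sketch with Unsloth's `FastLanguageModel`. The repo id, sequence length, LoRA hyperparameters, and target module names are illustrative assumptions, and the Gemma 3n Colab notebook linked above remains the authoritative recipe (it may use a different loader for multimodal support).

```python
# Sketch: text-only LoRA fine-tuning setup with Unsloth (illustrative values).
from unsloth import FastLanguageModel

# Assumption: 4-bit loading of the instruction-tuned checkpoint; adjust to your setup.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3n-E4B-it",
    max_seq_length=2048,   # Gemma 3n supports up to a 32K context
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    # Typical attention/MLP projection names; verify against the notebook.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# From here, train with TRL's SFTTrainer exactly as in the Colab notebooks above.
```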
+
+ # Gemma-3n-E4B model card