---
license: gemma
base_model: google/gemma-3n-e4b-it
tags:
- gemma3n
- gguf
- quantized
- llama.cpp
- ollama
- inference
- conversational
- multilingual
language:
- en
- multilingual
pipeline_tag: text-generation
model_type: gemma3n
library_name: transformers
---

<p style="margin-bottom: 0;">
    <em>See <a href="https://huggingface.co/muranAI">our collection</a> for all new models.</em>
</p>

<div style="display: flex; gap: 5px; align-items: center; ">
    <a href="https://muranai.com/">
        <img src="https://muranai.com/images/logo_white.png" width="133">
    </a>
</div>

# Gemma 3n E4B IT - Complete GGUF Collection

This repository contains a comprehensive collection of **Gemma 3n E4B Instruction-Tuned** models quantized to various GGUF formats for efficient inference on different hardware configurations.

## 📊 **Quantization Overview**

| **Format** | **Size (GB)** | **Compression** | **Quality** | **Use Case** |
|------------|---------------|-----------------|-------------|--------------|
| **F16** | 13.0 | 1.0x | 🌟🌟🌟🌟🌟 | Research, maximum quality |
| **Q8_0** | 6.8 | 1.9x | 🌟🌟🌟🌟🌟 | Production, near-original quality |
| **Q6_K** | 5.3 | 2.5x | 🌟🌟🌟🌟 | High-quality inference |
| **Q5_1** | 4.9 | 2.6x | 🌟🌟🌟🌟 | Balanced quality/speed |
| **Q5_K_M** | 4.6 | 2.8x | 🌟🌟🌟🌟 | **Recommended for most users** |
| **Q5_K_S** | 4.5 | 2.8x | 🌟🌟🌟🌟 | Slightly faster Q5 |
| **Q5_0** | 4.5 | 2.8x | 🌟🌟🌟🌟 | Good balance |
| **Q4_1** | 4.2 | 3.1x | 🌟🌟🌟 | Better 4-bit quality |
| **Q4_K_M** | 3.9 | 3.3x | 🌟🌟🌟 | **Popular choice** |
| **Q4_K_S** | 3.8 | 3.4x | 🌟🌟🌟 | Efficient 4-bit |
| **Q4_0** | 3.8 | 3.4x | 🌟🌟🌟 | **Most compatible** |
| **Q3_K_L** | 3.4 | 3.8x | 🌟🌟 | High compression |
| **Q3_K_M** | 3.2 | 4.0x | 🌟🌟 | Aggressive compression |
| **Q3_K_S** | 3.0 | 4.3x | 🌟🌟 | Maximum speed |
| **Q2_K** | 2.6 | 5.0x | 🌟 | Extreme compression |

## 🚀 **Quick Start**

### Using with Ollama
```bash
# Option 1: Use pre-built model (requires Ollama ≥ 0.10.0)
ollama run gemma3n:e4b

# Option 2: Import custom quantization
ollama create my-gemma3n -f Modelfile
```
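
The `Modelfile` referenced above is not shipped in this repository. As a minimal sketch (assuming a locally downloaded `gemma-3n-e4b-it-q4_k_m.gguf` and the recommended sampling parameters from the Chat Template section below), it might look like:

```
# Modelfile (illustrative example; adjust the path and parameters to taste)
FROM ./gemma-3n-e4b-it-q4_k_m.gguf

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER repeat_penalty 1.1
PARAMETER stop "<end_of_turn>"

# Simplified Gemma-style chat template (see the Chat Template section below)
TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>
"""
```

After saving it as `Modelfile`, `ollama create my-gemma3n -f Modelfile` followed by `ollama run my-gemma3n` starts a local chat session.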

### Using with llama.cpp
```bash
# Download your preferred quantization
./llama-server -m gemma-3n-e4b-it-q4_k_m.gguf -c 4096

# For chat interface
./llama-cli -m gemma-3n-e4b-it-q4_k_m.gguf --color -cnv
```
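
`llama-server` also exposes an OpenAI-compatible HTTP API (by default on `http://localhost:8080`), so the served model can be queried from any HTTP client. A minimal Python sketch with `requests` (prompt and parameter values are illustrative):

```python
import requests

# Query the OpenAI-compatible chat endpoint exposed by llama-server.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Summarize what GGUF quantization does in two sentences."}
        ],
        "max_tokens": 256,
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```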

### Using with Python (llama-cpp-python)
```python
from llama_cpp import Llama

# Load model
llm = Llama(
    model_path="gemma-3n-e4b-it-q4_k_m.gguf",
    n_ctx=4096,
    n_threads=8,
    verbose=False
)

# Generate response
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello! Can you introduce yourself?"}
    ],
    max_tokens=512,
    temperature=0.7
)

print(response['choices'][0]['message']['content'])
```

## 💬 **Chat Template**

Gemma 3n uses the following chat template:
```
<bos><start_of_turn>user
{user_message}<end_of_turn>
<start_of_turn>model
{assistant_response}<end_of_turn>
```

### Recommended Parameters
```yaml
temperature: 0.7
top_p: 0.9
top_k: 40
repeat_penalty: 1.1
max_tokens: 4096
stop_tokens: ["<end_of_turn>"]
```
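
If you call the raw completion API instead of `create_chat_completion`, the template and stop token above have to be applied by hand. A hedged sketch with llama-cpp-python (file name and prompt are examples; the tokenizer normally adds `<bos>` itself, so it is omitted from the string):

```python
from llama_cpp import Llama

llm = Llama(model_path="gemma-3n-e4b-it-q4_k_m.gguf", n_ctx=4096, verbose=False)

# Build the prompt with the Gemma turn markers shown above.
user_message = "Explain GGUF quantization in one paragraph."
prompt = (
    "<start_of_turn>user\n"
    f"{user_message}<end_of_turn>\n"
    "<start_of_turn>model\n"
)

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.7,
    top_p=0.9,
    top_k=40,
    repeat_penalty=1.1,
    stop=["<end_of_turn>"],  # cut generation at the end-of-turn marker
)
print(output["choices"][0]["text"])
```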

## 📋 **Model Details**

- **Base Model**: [google/gemma-3n-e4b-it](https://huggingface.co/google/gemma-3n-e4b-it)
- **Architecture**: Gemma 3n (Effective 4B parameters)
- **Context Length**: 32,768 tokens
- **Vocabulary Size**: 256,000 tokens
- **Training Data**: Multilingual (140+ languages)
- **License**: Gemma License

### Key Features
- ✅ **Efficient Architecture**: Uses selective parameter activation
- ✅ **Multilingual Support**: Trained on 140+ spoken languages
- ✅ **Instruction-Tuned**: Optimized for conversational AI
- ✅ **Extended Context**: 32K token context window
- ✅ **Hardware Optimized**: Designed for everyday devices

## 🔧 **Quantization Details**

All models were quantized using the latest **llama.cpp** with full Gemma 3n architecture support:

### Quantization Process
```bash
# Example quantization command
./llama-quantize input.gguf output.gguf q4_k_m
```

### Architecture-Specific Tensors
The quantization properly handles Gemma 3n-specific components:
- `altup_correct_coef`, `altup_correct_scale`, `altup_predict_coef`
- `altup_router`, `altup_router_norm`
- `laurel_l`, `laurel_r`, `laurel_post_norm`
- Standard attention, FFN, and normalization layers

## 📈 **Performance Benchmarks**

Based on the original Gemma 3n E4B IT model:

| **Benchmark** | **Score** | **Category** |
|---------------|-----------|--------------|
| MMLU | 64.9% | General Knowledge |
| HumanEval | 75.0% | Code Generation |
| HellaSwag | 78.6% | Commonsense Reasoning |
| ARC-E | 81.6% | Elementary Science |
| TriviaQA | 70.2% | Factual Knowledge |
| MBPP | 63.6% | Programming |

## 🎯 **Recommended Usage**

### For Different Hardware:
- **High-end GPU (24GB+ VRAM)**: Q8_0 or F16
- **Mid-range GPU (12-16GB VRAM)**: Q6_K or Q5_K_M
- **Consumer GPU (8-12GB VRAM)**: Q4_K_M or Q5_K_S
- **CPU-only or low VRAM**: Q4_0 or Q3_K_M
- **Mobile/Edge devices**: Q3_K_S or Q2_K

### For Different Use Cases:
- **Production chatbots**: Q5_K_M or Q4_K_M
- **Research/evaluation**: Q8_0 or F16
- **Rapid prototyping**: Q4_0
- **Resource-constrained**: Q3_K_M
- **Edge deployment**: Q2_K
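
As a rough illustration of how these tiers map onto llama-cpp-python settings (file names and values are examples, not fixed requirements): `n_gpu_layers=-1` offloads every layer to the GPU, while `n_gpu_layers=0` keeps inference entirely on the CPU.

```python
from llama_cpp import Llama

# Mid-range GPU: larger quantization, all layers offloaded (needs a GPU-enabled build).
gpu_llm = Llama(
    model_path="gemma-3n-e4b-it-q5_k_m.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
)

# CPU-only or low-VRAM machine: smaller quantization, no offloading.
cpu_llm = Llama(
    model_path="gemma-3n-e4b-it-q4_0.gguf",
    n_ctx=2048,
    n_gpu_layers=0,
    n_threads=8,  # tune to your physical core count
)
```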

## 🛠️ **Technical Requirements**

### Minimum System Requirements:
- **RAM**: 4GB (for Q2_K) to 16GB (for F16)
- **Storage**: 3GB to 13GB depending on quantization
- **CPU**: Modern x64 processor with AVX support
- **GPU** (optional): CUDA, OpenCL, or Metal support

### Software Dependencies:
- **llama.cpp**: Latest version with Gemma 3n support
- **Python**: 3.8+ for llama-cpp-python
- **Ollama**: 0.10.0+ for official support

## 📚 **Usage Examples**

### Example Conversations:
```
User: What is Gemma 3n and how does it differ from other models?

Gemma 3n: I'm Gemma 3n, a helpful AI assistant created by Google. I'm part of the Gemma 3n family, which is designed for efficient execution on everyday devices like laptops, tablets, and phones. 

What makes me different from other models:
- **Selective Parameter Activation**: I use an innovative technology that allows me to operate effectively with 4B parameters while containing more total parameters
- **Multilingual Capabilities**: I was trained on data in over 140 spoken languages
- **Hardware Efficiency**: I'm optimized to run well on consumer hardware, not just high-end servers
- **Extended Context**: I can handle up to 32K tokens in context
- **Instruction-Tuned**: I'm specifically fine-tuned for conversational interactions

User: Can you help me write a Python function to calculate fibonacci numbers?

Gemma 3n: Certainly! Here are a few different approaches to calculate Fibonacci numbers:

def fibonacci_recursive(n):
    """Recursive approach - simple but inefficient for large n"""
    if n <= 1:
        return n
    return fibonacci_recursive(n-1) + fibonacci_recursive(n-2)

def fibonacci_iterative(n):
    """Iterative approach - efficient and readable"""
    if n <= 1:
        return n
    
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b

def fibonacci_generator(limit):
    """Generator for Fibonacci sequence up to limit"""
    a, b = 0, 1
    while a < limit:
        yield a
        a, b = b, a + b

# Examples:
print(fibonacci_iterative(10))  # Output: 55
print(list(fibonacci_generator(100)))  # All Fib numbers < 100
```

## 🔧 **Troubleshooting**

### Common Issues:

**1. "unknown model architecture" error:**
- Ensure you're using the latest llama.cpp with Gemma 3n support
- Update to llama.cpp commit `8846aace` or later

**2. Ollama compatibility issues:**
- Update Ollama to version 0.10.0 or later for native Gemma 3n support
- For older Ollama versions, use custom Modelfile approach

**3. Out of memory errors:**
- Try a smaller quantization (Q4_0, Q3_K_M, or Q2_K)
- Reduce context length with `-c` parameter
- Use CPU inference instead of GPU

**4. Slow inference:**
- Use GPU acceleration if available
- Try quantizations like Q4_K_M or Q4_0 for better speed
- Adjust thread count with `-t` parameter
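
The same mitigations apply when loading the model from Python rather than the CLI; a rough sketch (all values are illustrative):

```python
from llama_cpp import Llama

# Out-of-memory mitigation: smaller quantization, shorter context, CPU fallback.
llm = Llama(
    model_path="gemma-3n-e4b-it-q3_k_m.gguf",  # smaller file than Q4_K_M
    n_ctx=2048,                                # reduced context window
    n_threads=6,                               # equivalent of the -t flag
    n_gpu_layers=0,                            # skip GPU offload if VRAM is the issue
)
```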

### Performance Tips:
- **GPU users**: Q4_K_M or Q5_K_M offer the best speed/quality balance
- **CPU users**: Q4_0 provides good compatibility and reasonable speed
- **Memory-constrained**: Q3_K_M or Q2_K can run on limited hardware

## ⚠️ **Limitations**

- **Quantization Quality**: Lower bit quantizations (Q2_K, Q3_K) may have reduced quality
- **Context Length**: While supporting 32K tokens, performance may degrade with very long contexts
- **Language Coverage**: While multilingual, performance may vary across different languages
- **Knowledge Cutoff**: Training data has a cutoff date (June 2024)
- **Factual Accuracy**: May generate plausible but incorrect information
- **Code Generation**: While capable, may require verification for production use

## 🔒 **Ethical Considerations**

- **Bias**: Model may reflect biases present in training data
- **Safety**: Content filtering is not included in these GGUF versions
- **Verification**: Always verify outputs, especially for critical applications
- **Responsible Use**: Follow the Gemma License and ethical AI guidelines

## 📄 **Files Included**

```
gemma-3n-e4b-it-f16.gguf      # 13.0 GB - Full precision
gemma-3n-e4b-it-q8_0.gguf     # 6.8 GB  - 8-bit quantization
gemma-3n-e4b-it-q6_k.gguf     # 5.3 GB  - 6-bit K-quant
gemma-3n-e4b-it-q5_1.gguf     # 4.9 GB  - 5-bit (high quality)
gemma-3n-e4b-it-q5_k_m.gguf   # 4.6 GB  - 5-bit K-quant medium
gemma-3n-e4b-it-q5_k_s.gguf   # 4.5 GB  - 5-bit K-quant small
gemma-3n-e4b-it-q5_0.gguf     # 4.5 GB  - 5-bit standard
gemma-3n-e4b-it-q4_1.gguf     # 4.2 GB  - 4-bit (improved)
gemma-3n-e4b-it-q4_k_m.gguf   # 3.9 GB  - 4-bit K-quant medium
gemma-3n-e4b-it-q4_k_s.gguf   # 3.8 GB  - 4-bit K-quant small
gemma-3n-e4b-it-q4_0.gguf     # 3.8 GB  - 4-bit standard
gemma-3n-e4b-it-q3_k_l.gguf   # 3.4 GB  - 3-bit K-quant large
gemma-3n-e4b-it-q3_k_m.gguf   # 3.2 GB  - 3-bit K-quant medium
gemma-3n-e4b-it-q3_k_s.gguf   # 3.0 GB  - 3-bit K-quant small
gemma-3n-e4b-it-q2_k.gguf     # 2.6 GB  - 2-bit K-quant
```

## 🙏 **Acknowledgments**

- **Google DeepMind**: For developing and releasing Gemma 3n
- **llama.cpp community**: For implementing Gemma 3n architecture support
- **Hugging Face**: For providing the model hosting platform
- **Quantization**: Performed using the latest llama.cpp tools

## 📞 **Support & Community**

- **Issues**: Report problems in the repository issues
- **Discussions**: Join the community discussions
- **Updates**: Follow for model updates and improvements

## 📜 **Citation**

If you use these models in your research, please cite:

```bibtex
@article{gemma_3n_2025,
    title={Gemma 3n},
    url={https://ai.google.dev/gemma/docs/gemma-3n},
    publisher={Google DeepMind},
    author={Gemma Team},
    year={2025}
}
```

---

**License**: This model is released under the Gemma License. Please review the license terms before use.

**Disclaimer**: These quantized models are provided for research and educational purposes. Users are responsible for ensuring compliance with applicable laws and ethical guidelines.


# Gemma 3n model card

**Model Page**: [Gemma 3n](https://ai.google.dev/gemma/docs/gemma-3n)

**Resources and Technical Documentation**:

-   [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
-   [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma-3n)
-   [Gemma on HuggingFace](https://huggingface.co/collections/google/gemma-3n-685065323f5984ef315c93f4)
-   [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3n)

**Terms of Use**: [Terms](https://ai.google.dev/gemma/terms)\
**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3n models are designed for efficient execution on low-resource devices.
They are capable of multimodal input, handling text, image, video, and audio
input, and generating text outputs, with open weights for pre-trained and
instruction-tuned variants. These models were trained with data in over 140
spoken languages.

Gemma 3n models use selective parameter activation technology to reduce resource
requirements. This technique allows the models to operate at an effective size
of 2B and 4B parameters, which is lower than the total number of parameters they
contain. For more information on Gemma 3n's efficient parameter management
technology, see the
[Gemma 3n](https://ai.google.dev/gemma/docs/gemma-3n#parameters)
page.

### Inputs and outputs

-   **Input:**
    -   Text string, such as a question, a prompt, or a document to be
        summarized
    -   Images, normalized to 256x256, 512x512, or 768x768 resolution
        and encoded to 256 tokens each
    -   Audio data encoded to 6.25 tokens per second from a single channel
    -   Total input context of 32K tokens
-   **Output:**
    -   Generated text in response to the input, such as an answer to a
        question, analysis of image content, or a summary of a document
    -   Total output length up to 32K tokens, subtracting the request
        input tokens
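
As a back-of-the-envelope illustration of how these per-modality costs combine into a request budget (token counts are taken from the list above; the split itself is only an example):

```python
# Rough token budget for a mixed text + image + audio request.
TOTAL_CONTEXT = 32_000        # ~32K-token input context
IMAGE_TOKENS = 256            # per image
AUDIO_TOKENS_PER_SECOND = 6.25

text_tokens = 1_500                               # prompt plus a short document
image_tokens = 2 * IMAGE_TOKENS                   # two images -> 512 tokens
audio_tokens = int(30 * AUDIO_TOKENS_PER_SECOND)  # 30-second clip -> 187 tokens

input_tokens = text_tokens + image_tokens + audio_tokens
print(f"{input_tokens} input tokens, {TOTAL_CONTEXT - input_tokens} left for output")
```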

### Usage

Below are some code snippets to help you get started quickly with running the
model. First, install the Transformers library. Gemma 3n is supported starting
from transformers 4.53.0.

```sh
$ pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

You can initialize the model and processor for inference with `pipeline` as
follows.

```python
from transformers import pipeline
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3n-e4b-it",
    device="cuda",
    torch_dtype=torch.bfloat16,
)
```

With instruction-tuned models, you need to use chat templates to process your
inputs first. Then, you can pass them to the pipeline.

```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```

#### Running the model on a single GPU

```python
from transformers import AutoProcessor, Gemma3nForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/gemma-3n-e4b-it"

model = Gemma3nForConditionalGeneration.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16,).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)

# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```

### Citation

```
@article{gemma_3n_2025,
    title={Gemma 3n},
    url={https://ai.google.dev/gemma/docs/gemma-3n},
    publisher={Google DeepMind},
    author={Gemma Team},
    year={2025}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset that includes a wide variety of sources
totalling approximately 11 trillion tokens. The knowledge cutoff date for the
training data was June 2024. Here are the key components:

-   **Web Documents**: A diverse collection of web text ensures the model
    is exposed to a broad range of linguistic styles, topics, and vocabulary.
    The training dataset includes content in over 140 languages.
-   **Code**: Exposing the model to code helps it to learn the syntax and
    patterns of programming languages, which improves its ability to generate
    code and understand code-related questions.
-   **Mathematics**: Training on mathematical text helps the model learn
    logical reasoning, symbolic representation, and to address mathematical queries.
-   **Images**: A wide range of images enables the model to perform image
    analysis and visual data extraction tasks.
-   **Audio**: A diverse set of sound samples enables the model to recognize
    speech, transcribe text from recordings, and identify information in audio data.

The combination of these diverse data sources is crucial for training a
powerful multimodal model that can handle a wide variety of different tasks and
data formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

-   **CSAM Filtering**: Rigorous CSAM (Child Sexual Abuse Material)
    filtering was applied at multiple stages in the data preparation process to
    ensure the exclusion of harmful and illegal content.
-   **Sensitive Data Filtering**: As part of making Gemma pre-trained models
    safe and reliable, automated techniques were used to filter out certain
    personal information and other sensitive data from training sets.
-   **Additional methods**: Filtering based on content quality and safety in
    line with
    [our policies](https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf).

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit
(TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv4p, TPUv5p
and TPUv5e). Training generative models requires significant computational
power. TPUs, designed specifically for matrix operations common in machine
learning, offer several advantages in this domain:

-   **Performance**: TPUs are specifically designed to handle the massive
    computations involved in training generative models. They can speed up
    training considerably compared to CPUs.
-   **Memory**: TPUs often come with large amounts of high-bandwidth memory,
    allowing for the handling of large models and batch sizes during training.
    This can lead to better model quality.
-   **Scalability**: TPU Pods (large clusters of TPUs) provide a scalable
    solution for handling the growing complexity of large foundation models.
    You can distribute training across multiple TPU devices for faster and more
    efficient processing.
-   **Cost-effectiveness**: In many scenarios, TPUs can provide a more
    cost-effective solution for training large models compared to CPU-based
    infrastructure, especially when considering the time and resources saved
    due to faster training.

These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).

### Software

Training was done using [JAX](https://github.com/jax-ml/jax) and
[ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://goo.gle/gemma2report):
*"the 'single controller' programming model of Jax and Pathways allows a single
Python process to orchestrate the entire training run, dramatically simplifying
the development workflow."*

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated at full precision (float32) against a large
collection of different datasets and metrics to cover different aspects of
content generation. Evaluation results marked with **IT** are for
instruction-tuned models. Evaluation results marked with **PT** are for
pre-trained models.

#### Reasoning and factuality

| Benchmark                      | Metric         | n-shot   |  E2B PT  |  E4B PT  |
| ------------------------------ |----------------|----------|:--------:|:--------:|
| [HellaSwag][hellaswag]         | Accuracy       | 10-shot  |   72.2   |   78.6   |
| [BoolQ][boolq]                 | Accuracy       | 0-shot   |   76.4   |   81.6   |
| [PIQA][piqa]                   | Accuracy       | 0-shot   |   78.9   |   81.0   |
| [SocialIQA][socialiqa]         | Accuracy       | 0-shot   |   48.8   |   50.0   |
| [TriviaQA][triviaqa]           | Accuracy       | 5-shot   |   60.8   |   70.2   |
| [Natural Questions][naturalq]  | Accuracy       | 5-shot   |   15.5   |   20.9   |
| [ARC-c][arc]                   | Accuracy       | 25-shot  |   51.7   |   61.6   |
| [ARC-e][arc]                   | Accuracy       | 0-shot   |   75.8   |   81.6   |
| [WinoGrande][winogrande]       | Accuracy       | 5-shot   |   66.8   |   71.7   |
| [BIG-Bench Hard][bbh]          | Accuracy       | few-shot |   44.3   |   52.9   |
| [DROP][drop]                   | Token F1 score | 1-shot   |   53.9   |   60.8   |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

#### Multilingual

| Benchmark                           | Metric                  | n-shot   |  E2B IT  |  E4B IT  |
| ------------------------------------|-------------------------|----------|:--------:|:--------:|
| [MGSM][mgsm]                        | Accuracy                |  0-shot  |   53.1   |   60.7   |
| [WMT24++][wmt24pp] (ChrF)           | Character-level F-score |  0-shot  |   42.7   |   50.1   |
| [Include][include]                  | Accuracy                |  0-shot  |   38.6   |   57.2   |
| [MMLU][mmlu] (ProX)                 | Accuracy                |  0-shot  |    8.1   |   19.9   |
| [OpenAI MMLU][openai-mmlu]          | Accuracy                |  0-shot  |   22.3   |   35.6   |
| [Global-MMLU][global-mmlu]          | Accuracy                |  0-shot  |   55.1   |   60.3   |
| [ECLeKTic][eclektic]                | ECLeKTic score          |  0-shot  |    2.5   |    1.9   |

[mgsm]: https://arxiv.org/abs/2210.03057
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[include]:https://arxiv.org/abs/2411.19799
[mmlu]: https://arxiv.org/abs/2009.03300
[openai-mmlu]: https://huggingface.co/datasets/openai/MMMLU
[global-mmlu]: https://huggingface.co/datasets/CohereLabs/Global-MMLU
[eclektic]: https://arxiv.org/abs/2502.21228

#### STEM and code

| Benchmark                           | Metric                   | n-shot   |  E2B IT  |  E4B IT  |
| ------------------------------------|--------------------------|----------|:--------:|:--------:|
| [GPQA][gpqa] Diamond                | RelaxedAccuracy/accuracy |  0-shot  |   24.8   |   23.7   |
| [LiveCodeBench][lcb] v5             | pass@1                   |  0-shot  |   18.6   |   25.7   |
| Codegolf v2.2                       | pass@1                   |  0-shot  |   11.0   |   16.8   |
| [AIME 2025][aime-2025]              | Accuracy                 |  0-shot  |    6.7   |   11.6   |

[gpqa]: https://arxiv.org/abs/2311.12022
[lcb]: https://arxiv.org/abs/2403.07974
[aime-2025]: https://www.vals.ai/benchmarks/aime-2025-05-09

#### Additional benchmarks

| Benchmark                            | Metric     | n-shot   |  E2B IT  |  E4B IT  |
| ------------------------------------ |------------|----------|:--------:|:--------:|
| [MMLU][mmlu]                         |  Accuracy  |  0-shot  |   60.1   |   64.9   |
| [MBPP][mbpp]                         |  pass@1    |  3-shot  |   56.6   |   63.6   |
| [HumanEval][humaneval]               |  pass@1    |  0-shot  |   66.5   |   75.0   |
| [LiveCodeBench][lcb]                 |  pass@1    |  0-shot  |   13.2   |   13.2   |
| HiddenMath                           |  Accuracy  |  0-shot  |   27.7   |   37.7   |
| [Global-MMLU-Lite][global-mmlu-lite] |  Accuracy  |  0-shot  |   59.0   |   64.5   |
| [MMLU][mmlu] (Pro)                   |  Accuracy  |  0-shot  |   40.5   |   50.6   |

[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
[lcb]: https://arxiv.org/abs/2403.07974
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

-   **Child Safety**: Evaluation of text-to-text and image to text prompts
    covering child safety policies, including child sexual abuse and
    exploitation.
-   **Content Safety:** Evaluation of text-to-text and image to text prompts
    covering safety policies including, harassment, violence and gore, and hate
    speech.
-   **Representational Harms**: Evaluation of text-to-text and image to text
    prompts covering safety policies including bias, stereotyping, and harmful
    associations or inaccuracies.

In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making. Notable
assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.

### Evaluation Results

For all areas of safety testing, we saw safe levels of performance across the
categories of child safety, content safety, and representational harms relative
to previous Gemma models. All testing was conducted without safety filters to
evaluate the model capabilities and behaviors. For text-to-text, image-to-text,
and audio-to-text, and across all model sizes, the model produced minimal policy
violations and showed significant improvements over previous Gemma models'
performance with respect to high-severity violations. A limitation of our
evaluations was that they included primarily English-language prompts.

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open generative models have a wide range of applications across various
industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

-   Content Creation and Communication
    -   **Text Generation**: Generate creative text formats such as
        poems, scripts, code, marketing copy, and email drafts.
    -   **Chatbots and Conversational AI**: Power conversational
        interfaces for customer service, virtual assistants, or interactive
        applications.
    -   **Text Summarization**: Generate concise summaries of a text
        corpus, research papers, or reports.
    -   **Image Data Extraction**: Extract, interpret, and summarize
        visual data for text communications.
    -   **Audio Data Extraction**: Transcribe spoken language, translate speech
        to text in other languages, and analyze sound-based data.
-   Research and Education
    -   **Natural Language Processing (NLP) and generative model
        Research**: These models can serve as a foundation for researchers to
        experiment with generative models and NLP techniques, develop
        algorithms, and contribute to the advancement of the field.
    -   **Language Learning Tools**: Support interactive language
        learning experiences, aiding in grammar correction or providing writing
        practice.
    -   **Knowledge Exploration**: Assist researchers in exploring large
        bodies of data by generating summaries or answering questions about
        specific topics.

### Limitations

-   Training Data
    -   The quality and diversity of the training data significantly
        influence the model's capabilities. Biases or gaps in the training data
        can lead to limitations in the model's responses.
    -   The scope of the training dataset determines the subject areas
        the model can handle effectively.
-   Context and Task Complexity
    -   Models are better at tasks that can be framed with clear
        prompts and instructions. Open-ended or highly complex tasks might be
        challenging.
    -   A model's performance can be influenced by the amount of context
        provided (longer context generally leads to better outputs, up to a
        certain point).
-   Language Ambiguity and Nuance
    -   Natural language is inherently complex. Models might struggle
        to grasp subtle nuances, sarcasm, or figurative language.
-   Factual Accuracy
    -   Models generate responses based on information they learned
        from their training datasets, but they are not knowledge bases. They
        may generate incorrect or outdated factual statements.
-   Common Sense
    -   Models rely on statistical patterns in language. They might
        lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of generative models raises several ethical concerns. In
creating an open model, we have carefully considered the following:

-   Bias and Fairness
    -   Generative models trained on large-scale, real-world text and image data
        can reflect socio-cultural biases embedded in the training material.
        These models underwent careful scrutiny; input data pre-processing is
        described and posterior evaluations are reported in this card.
-   Misinformation and Misuse
    -   Generative models can be misused to generate text that is
        false, misleading, or harmful.
    -   Guidelines are provided for responsible use with the model, see the
        [Responsible Generative AI Toolkit](https://ai.google.dev/responsible).
-   Transparency and Accountability:
    -   This model card summarizes details on the models' architecture,
        capabilities, limitations, and evaluation processes.
    -   A responsibly developed open model offers the opportunity to
        share innovation by making generative model technology accessible to
        developers and researchers across the AI ecosystem.

Risks identified and mitigations:

-   **Perpetuation of biases**: It's encouraged to perform continuous monitoring
    (using evaluation metrics, human review) and the exploration of de-biasing
    techniques during model training, fine-tuning, and other use cases.
-   **Generation of harmful content**: Mechanisms and guidelines for content
    safety are essential. Developers are encouraged to exercise caution and
    implement appropriate content safety safeguards based on their specific
    product policies and application use cases.
-   **Misuse for malicious purposes**: Technical limitations and developer
    and end-user education can help mitigate against malicious applications of
    generative models. Educational resources and reporting mechanisms for users
    to flag misuse are provided. Prohibited uses of Gemma models are outlined
    in the
    [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
-   **Privacy violations**: Models were trained on data filtered for removal of
    certain personal information and other sensitive data. Developers are
    encouraged to adhere to privacy regulations with privacy-preserving
    techniques.

### Benefits

At the time of release, this family of models provides high-performance open
generative model implementations designed from the ground up for responsible AI
development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other comparably sized open
model alternatives.