---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-pt
---

# <span style="color: #7FFF7F;">gemma-3-27b-it GGUF Models</span>

## **Choosing the Right Model Format**

Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.

### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides a **similar dynamic range** to FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device’s specs).
- Ideal for **high-performance inference** with a **reduced memory footprint** compared to FP32.

📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.

📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.

---

### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision**, but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16, but generally sufficient for inference.

📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.

📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have tight memory constraints (the quantized formats below use far less memory).

---

### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, require more memory.

📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.

📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).

---

### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.

- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
  - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
  - **Trade-off**: Lower accuracy compared to higher-bit quantizations.

- **IQ3_S**: Small block size for **maximum memory efficiency**.
  - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.

- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
  - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.

- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
  - **Use case**: Best for **low-memory devices** where **Q6_K** is too large.

- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
  - **Use case**: Best for **ARM-based devices** or **low-memory environments**.

---

### **Summary Table: Model Format Selection**

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn’t available |
| **Q4_K** | Medium Low | Low | CPU or Low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency and low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |

---

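As a rough sanity check when picking a format, you can estimate a file's on-disk size from the parameter count and the average bits per weight. The sketch below uses ballpark bits-per-weight figures; they are not exact values for these files, since some tensors (e.g. output and embeddings) are kept at higher precision:

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes.
# The bpw values are typical ballpark figures, not exact numbers for these files.
PARAMS = 27e9  # gemma-3-27b-it

typical_bpw = {
    "bf16": 16.0,
    "q8_0": 8.5,
    "q6_k": 6.6,
    "q4_k": 4.8,
    "iq3_xs": 3.3,
}

for name, bpw in typical_bpw.items():
    size_gb = PARAMS * bpw / 8 / 1e9
    print(f"{name:8s} ~{size_gb:5.1f} GB")
```

As a rule of thumb, plan for roughly the file size plus a few extra GB of RAM/VRAM for the KV cache and activations.
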
## **Included Files & Details**

### `gemma-3-27b-it-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.

### `gemma-3-27b-it-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.

### `gemma-3-27b-it-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.

### `gemma-3-27b-it-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.

### `gemma-3-27b-it-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.

### `gemma-3-27b-it-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.

### `gemma-3-27b-it-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.

### `gemma-3-27b-it-q8_0.gguf`
- Fully **Q8** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.

### `gemma-3-27b-it-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.

### `gemma-3-27b-it-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.

### `gemma-3-27b-it-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer **IQ4_NL** for better accuracy.

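If you want a quick way to try one of these files, the sketch below loads a quant on CPU with `llama-cpp-python` (the `llama.cpp` CLI works just as well). It is only a sketch: the file path, context size, and thread count are placeholders to adjust for your machine, and image input is not shown here.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-3-27b-it-q4_k.gguf",  # placeholder: point at the quant you downloaded
    n_ctx=8192,                               # context window; raise it if you have the RAM
    n_threads=8,                              # CPU threads
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain BF16 vs Q4_K in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
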
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Please click like ❤. I’d also really appreciate it if you could test my Network Monitor Assistant at 👉 [Network Monitor Assistant](https://freenetworkmonitor.click/dashboard).

💬 Click the **chat icon** (bottom right of the main and dashboard pages), choose an LLM, and toggle between the LLM types: TurboLLM -> FreeLLM -> TestLLM.

### What I'm Testing

I'm experimenting with **function calling** against my network monitoring service, using small open-source models. The question I'm exploring is: "How small can a model be and still function?"

🟡 **TestLLM** – Runs the current test model with llama.cpp on 6 threads of a CPU VM. It takes about 15 s to load, inference is quite slow, and it only processes one user prompt at a time; I'm still working on scaling. If you're curious, I'd be happy to share how it works!

### The Other Available AI Assistants

🟢 **TurboLLM** – Uses **gpt-4o-mini**. Fast! Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://freenetworkmonitor.click) or [Download](https://freenetworkmonitor.click/download) the Free Network Monitor agent to get more tokens. Alternatively, use the TestLLM.

🔵 **HugLLM** – Runs **open-source Hugging Face models**. Fast, but uses small models (≈8B), hence lower quality. Get 2x more tokens (subject to Hugging Face API availability).

# Gemma 3 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)

**Resources and Technical Documentation**:

* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]

**Terms of Use**: [Terms][terms]

**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.

### Inputs and outputs

- **Input:**
    - Text string, such as a question, a prompt, or a document to be summarized
    - Images, normalized to 896 x 896 resolution and encoded to 256 tokens
      each
    - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
      32K tokens for the 1B size

- **Output:**
    - Generated text in response to the input, such as an answer to a
      question, analysis of image content, or a summary of a document
    - Total output context of 8192 tokens

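Since each image costs a fixed 256 tokens, it is easy to budget the input window. A quick back-of-the-envelope sketch (assuming the 27B model's roughly 128,000-token input context, and ignoring special tokens and chat-template overhead):

```python
# Rough context budgeting for the 27B model (~128K-token input window).
CONTEXT_TOKENS = 128_000
TOKENS_PER_IMAGE = 256

n_images = 10
text_budget = CONTEXT_TOKENS - n_images * TOKENS_PER_IMAGE
print(f"{n_images} images leave ~{text_budget:,} tokens for text")
# -> 10 images leave ~125,440 tokens for text
```
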
### Usage

Below are some code snippets to help you get started running the model. First, install the Transformers library. Gemma 3 is supported starting from transformers 4.50.0.

```sh
$ pip install -U transformers
```

Then, copy the snippet from the section that is relevant for your use case.

#### Running with the `pipeline` API

You can initialize the model and processor for inference with `pipeline` as follows.

```python
from transformers import pipeline
import torch

pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-27b-it",
    device="cuda",
    torch_dtype=torch.bfloat16
)
```

With instruction-tuned models, you need to use chat templates to process your inputs first. Then, you can pass them to the pipeline.

```python
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
# Okay, let's take a look!
# Based on the image, the animal on the candy is a **turtle**.
# You can see the shell shape and the head and legs.
```

#### Running the model on a single/multi GPU

```python
# pip install accelerate

from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from PIL import Image
import requests
import torch

model_id = "google/gemma-3-27b-it"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto"
).eval()

processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

input_len = inputs["input_ids"].shape[-1]

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)

# **Overall Impression:** The image is a close-up shot of a vibrant garden scene,
# focusing on a cluster of pink cosmos flowers and a busy bumblebee.
# It has a slightly soft, natural feel, likely captured in daylight.
```

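The same model and processor handle plain-text chat as well. The following variant is only a sketch that mirrors the snippet above with the image entry removed from the user turn (the example prompt is made up):

```python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
import torch

model_id = "google/gemma-3-27b-it"

model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto"
).eval()
processor = AutoProcessor.from_pretrained(model_id)

# Text-only chat: same chat template, no image entry in the user turn.
messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant."}]},
    {"role": "user", "content": [{"type": "text", "text": "Summarize the Gemma 3 model family in two sentences."}]},
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=150, do_sample=False)

# Strip the prompt tokens and decode only the newly generated text.
input_len = inputs["input_ids"].shape[-1]
print(processor.decode(generation[0][input_len:], skip_special_tokens=True))
```
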
### Citation

```none
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```

## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model with
12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model with
2 trillion tokens. Here are the key components:

- Web Documents: A diverse collection of web text ensures the model is
  exposed to a broad range of linguistic styles, topics, and vocabulary. The
  training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
  patterns of programming languages, which improves its ability to generate
  code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
  analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
  was applied at multiple stages in the data preparation process to ensure
  the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
  safe and reliable, automated techniques were used to filter out certain
  personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
  line with [our policies][safety-policies].

## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive
  computations involved in training VLMs. They can speed up training
  considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
  allowing for the handling of large models and batch sizes during training.
  This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
  solution for handling the growing complexity of large foundation models.
  You can distribute training across multiple TPU devices for faster and more
  efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
  cost-effective solution for training large models compared to CPU-based
  infrastructure, especially when considering the time and resources saved
  due to faster training.
- These advantages are aligned with
  [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

#### Reasoning and factuality

| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161

#### STEM and code

| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374

#### Multilingual

| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816

#### Multimodal

| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
  covering child safety policies, including child sexual abuse and
  exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
  covering safety policies including harassment, violence and gore, and hate
  speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies including bias, stereotyping, and harmful
  associations or inaccuracies.

In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English language prompts.

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

- Content Creation and Communication
    - Text Generation: These models can be used to generate creative text
      formats such as poems, scripts, code, marketing copy, and email drafts.
    - Chatbots and Conversational AI: Power conversational interfaces
      for customer service, virtual assistants, or interactive applications.
    - Text Summarization: Generate concise summaries of a text corpus,
      research papers, or reports.
    - Image Data Extraction: These models can be used to extract,
      interpret, and summarize visual data for text communications.
- Research and Education
    - Natural Language Processing (NLP) and VLM Research: These
      models can serve as a foundation for researchers to experiment with VLM
      and NLP techniques, develop algorithms, and contribute to the
      advancement of the field.
    - Language Learning Tools: Support interactive language learning
      experiences, aiding in grammar correction or providing writing practice.
    - Knowledge Exploration: Assist researchers in exploring large
      bodies of text by generating summaries or answering questions about
      specific topics.

### Limitations

- Training Data
    - The quality and diversity of the training data significantly
      influence the model's capabilities. Biases or gaps in the training data
      can lead to limitations in the model's responses.
    - The scope of the training dataset determines the subject areas
      the model can handle effectively.
- Context and Task Complexity
    - Models are better at tasks that can be framed with clear
      prompts and instructions. Open-ended or highly complex tasks might be
      challenging.
    - A model's performance can be influenced by the amount of context
      provided (longer context generally leads to better outputs, up to a
      certain point).
- Language Ambiguity and Nuance
    - Natural language is inherently complex. Models might struggle
      to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
    - Models generate responses based on information they learned
      from their training datasets, but they are not knowledge bases. They
      may generate incorrect or outdated factual statements.
- Common Sense
    - Models rely on statistical patterns in language. They might
      lack the ability to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
    - VLMs trained on large-scale, real-world text and image data can
      reflect socio-cultural biases embedded in the training material. These
      models underwent careful scrutiny, input data pre-processing as described,
      and posterior evaluations reported in this card.
- Misinformation and Misuse
    - VLMs can be misused to generate text that is false, misleading,
      or harmful.
    - Guidelines are provided for responsible use with the model; see the
      [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
    - This model card summarizes details on the models' architecture,
      capabilities, limitations, and evaluation processes.
    - A responsibly developed open model offers the opportunity to
      share innovation by making VLM technology accessible to developers and
      researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: Continuous monitoring (using evaluation
  metrics, human review) and the exploration of de-biasing techniques during
  model training, fine-tuning, and other use cases are encouraged.
- **Generation of harmful content**: Mechanisms and guidelines for content
  safety are essential. Developers are encouraged to exercise caution and
  implement appropriate content safety safeguards based on their specific
  product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
  and end-user education can help mitigate against malicious applications of
  VLMs. Educational resources and reporting mechanisms for users to flag
  misuse are provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
  of certain personal information and other sensitive data. Developers are
  encouraged to adhere to privacy regulations with privacy-preserving
  techniques.

### Benefits

At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/