gaunernst committed
Commit 9f36ff0 · verified · 1 Parent(s): 37433d6

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,477 @@
---
base_model: google/gemma-3-4b-it
license: gemma
tags:
- gemma3
- gemma
- google
pipeline_tag: image-text-to-text
---

# Gemma 3 4B Instruction-tuned QAT AutoAWQ

This checkpoint was converted from https://huggingface.co/google/gemma-3-4b-it-qat-q4_0-gguf to AutoAWQ format and BF16 dtype (hence, not lossless). The vision tower was transplanted from https://huggingface.co/google/gemma-3-4b-it.

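As a rough loading sketch (not a verified recipe): a checkpoint in this format would typically be loaded through Transformers with `autoawq` installed. The repo id below and AWQ support for `Gemma3ForConditionalGeneration` are assumptions; adjust to your environment.

```python
# Hedged sketch: assumes `pip install -U transformers autoawq` and that the
# Transformers AWQ integration covers Gemma3ForConditionalGeneration.
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "gaunernst/gemma-3-4b-it-qat-autoawq"  # assumed repo id; substitute the actual path

processor = AutoProcessor.from_pretrained(model_id)
model = Gemma3ForConditionalGeneration.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Write a poem about the Kraken."}]},
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```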
Below is the original model card.

# Gemma 3 model card

**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)

> [!Note]
> This repository corresponds to the 4B **instruction-tuned** version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT).
> The GGUF corresponds to Q4_0 quantization.
>
> Thanks to QAT, the model is able to preserve similar quality as `bfloat16` while significantly reducing the memory requirements
> to load the model.
>
> You can find the half-precision version [here](https://huggingface.co/google/gemma-3-4b-it).

**Resources and Technical Documentation**:

* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]

**Terms of Use**: [Terms][terms]

**Authors**: Google DeepMind

## Model Information

Summary description and brief definition of inputs and outputs.

### Description

Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.

### Inputs and outputs

- **Input:**
    - Text string, such as a question, a prompt, or a document to be summarized
    - Images, normalized to 896 x 896 resolution and encoded to 256 tokens
      each
    - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
      32K tokens for the 1B size

- **Output:**
    - Generated text in response to the input, such as an answer to a
      question, analysis of image content, or a summary of a document
    - Total output context of 8192 tokens
### Usage

Below are some code snippets to help you get started quickly with running the model.

**llama.cpp (text-only)**

```sh
./llama-cli -hf google/gemma-3-4b-it-qat-q4_0-gguf -p "Write a poem about the Kraken."
```

**llama.cpp (image input)**

```sh
wget https://github.com/bebechien/gemma/blob/main/surprise.png?raw=true -O ~/Downloads/surprise.png
./llama-gemma3-cli -hf google/gemma-3-4b-it-qat-q4_0-gguf -p "Describe this image." --image ~/Downloads/surprise.png
```

**ollama (text only)**

Using GGUFs with Ollama via Hugging Face does not support image inputs at the moment. Please check the [docs on running gated repositories](https://huggingface.co/docs/hub/en/ollama#run-private-ggufs-from-the-hugging-face-hub).

```sh
ollama run hf.co/google/gemma-3-4b-it-qat-q4_0-gguf
```

### Citation

```none
@article{gemma_2025,
    title={Gemma 3},
    url={https://goo.gle/Gemma3Report},
    publisher={Kaggle},
    author={Gemma Team},
    year={2025}
}
```
## Model Data

Data used for model training and how the data was processed.

### Training Dataset

These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model with
12 trillion tokens, the 4B model with 4 trillion tokens, and the 1B model with
2 trillion tokens. Here are the key components:

- Web Documents: A diverse collection of web text ensures the model is
  exposed to a broad range of linguistic styles, topics, and vocabulary. The
  training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
  patterns of programming languages, which improves its ability to generate
  code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
  reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
  analysis and visual data extraction tasks.

The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training
data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
  was applied at multiple stages in the data preparation process to ensure
  the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
  safe and reliable, automated techniques were used to filter out certain
  personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
  line with [our policies][safety-policies].
## Implementation Information

Details about the model internals.

### Hardware

Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMs) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:

- Performance: TPUs are specifically designed to handle the massive
  computations involved in training VLMs. They can speed up training
  considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
  allowing for the handling of large models and batch sizes during training.
  This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
  solution for handling the growing complexity of large foundation models.
  You can distribute training across multiple TPU devices for faster and more
  efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
  cost-effective solution for training large models compared to CPU-based
  infrastructure, especially when considering the time and resources saved
  due to faster training.
- These advantages are aligned with
  [Google's commitments to operate sustainably][sustainability].

### Software

Training was done using [JAX][jax] and [ML Pathways][ml-pathways].

JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is especially suitable for
foundation models, including large language models like these ones.

Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation

> [!Note]
> The evaluations in this section correspond to the original checkpoint, not the QAT checkpoint.

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:

#### Reasoning and factuality

| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |

[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code

| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |

[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374

#### Multilingual

| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |

[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal

| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |

[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:

- **Child Safety**: Evaluation of text-to-text and image-to-text prompts
  covering child safety policies, including child sexual abuse and
  exploitation.
- **Content Safety**: Evaluation of text-to-text and image-to-text prompts
  covering safety policies including harassment, violence and gore, and hate
  speech.
- **Representational Harms**: Evaluation of text-to-text and image-to-text
  prompts covering safety policies including bias, stereotyping, and harmful
  associations or inaccuracies.

In addition to development-level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High-level findings
are fed back to the model team, but prompt sets are held out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.

### Evaluation Results

For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included
only English-language prompts.
## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

- Content Creation and Communication
    - Text Generation: These models can be used to generate creative text
      formats such as poems, scripts, code, marketing copy, and email drafts.
    - Chatbots and Conversational AI: Power conversational interfaces
      for customer service, virtual assistants, or interactive applications.
    - Text Summarization: Generate concise summaries of a text corpus,
      research papers, or reports.
    - Image Data Extraction: These models can be used to extract,
      interpret, and summarize visual data for text communications.
- Research and Education
    - Natural Language Processing (NLP) and VLM Research: These
      models can serve as a foundation for researchers to experiment with VLM
      and NLP techniques, develop algorithms, and contribute to the
      advancement of the field.
    - Language Learning Tools: Support interactive language learning
      experiences, aiding in grammar correction or providing writing practice.
    - Knowledge Exploration: Assist researchers in exploring large
      bodies of text by generating summaries or answering questions about
      specific topics.

### Limitations

- Training Data
    - The quality and diversity of the training data significantly
      influence the model's capabilities. Biases or gaps in the training data
      can lead to limitations in the model's responses.
    - The scope of the training dataset determines the subject areas
      the model can handle effectively.
- Context and Task Complexity
    - Models are better at tasks that can be framed with clear
      prompts and instructions. Open-ended or highly complex tasks might be
      challenging.
    - A model's performance can be influenced by the amount of context
      provided (longer context generally leads to better outputs, up to a
      certain point).
- Language Ambiguity and Nuance
    - Natural language is inherently complex. Models might struggle
      to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
    - Models generate responses based on information they learned
      from their training datasets, but they are not knowledge bases. They
      may generate incorrect or outdated factual statements.
- Common Sense
    - Models rely on statistical patterns in language. They might
      lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks

The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:

- Bias and Fairness
    - VLMs trained on large-scale, real-world text and image data can
      reflect socio-cultural biases embedded in the training material. These
      models underwent careful scrutiny, with input data pre-processing
      described and posterior evaluations reported in this card.
- Misinformation and Misuse
    - VLMs can be misused to generate text that is false, misleading,
      or harmful.
    - Guidelines are provided for responsible use with the model; see the
      [Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability
    - This model card summarizes details on the models' architecture,
      capabilities, limitations, and evaluation processes.
    - A responsibly developed open model offers the opportunity to
      share innovation by making VLM technology accessible to developers and
      researchers across the AI ecosystem.

Risks identified and mitigations:

- **Perpetuation of biases**: It's encouraged to perform continuous
  monitoring (using evaluation metrics, human review) and the exploration of
  de-biasing techniques during model training, fine-tuning, and other use
  cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
  safety are essential. Developers are encouraged to exercise caution and
  implement appropriate content safety safeguards based on their specific
  product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
  and end-user education can help mitigate against malicious applications of
  VLMs. Educational resources and reporting mechanisms for users to flag
  misuse are provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
  of certain personal information and other sensitive data. Developers are
  encouraged to adhere to privacy regulations with privacy-preserving
  techniques.
### Benefits

At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.

Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably sized open
model alternatives.

[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805
added_tokens.json ADDED
@@ -0,0 +1,3 @@
{
  "<image_soft_token>": 262144
}
chat_template.json ADDED
@@ -0,0 +1,3 @@
{
  "chat_template": "{{ bos_token }}\n{%- if messages[0]['role'] == 'system' -%}\n {%- if messages[0]['content'] is string -%}\n {%- set first_user_prefix = messages[0]['content'] + '\n\n' -%}\n {%- else -%}\n {%- set first_user_prefix = messages[0]['content'][0]['text'] + '\n\n' -%}\n {%- endif -%}\n {%- set loop_messages = messages[1:] -%}\n{%- else -%}\n {%- set first_user_prefix = \"\" -%}\n {%- set loop_messages = messages -%}\n{%- endif -%}\n{%- for message in loop_messages -%}\n {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}\n {{ raise_exception(\"Conversation roles must alternate user/assistant/user/assistant/...\") }}\n {%- endif -%}\n {%- if (message['role'] == 'assistant') -%}\n {%- set role = \"model\" -%}\n {%- else -%}\n {%- set role = message['role'] -%}\n {%- endif -%}\n {{ '<start_of_turn>' + role + '\n' + (first_user_prefix if loop.first else \"\") }}\n {%- if message['content'] is string -%}\n {{ message['content'] | trim }}\n {%- elif message['content'] is iterable -%}\n {%- for item in message['content'] -%}\n {%- if item['type'] == 'image' -%}\n {{ '<start_of_image>' }}\n {%- elif item['type'] == 'text' -%}\n {{ item['text'] | trim }}\n {%- endif -%}\n {%- endfor -%}\n {%- else -%}\n {{ raise_exception(\"Invalid content type\") }}\n {%- endif -%}\n {{ '<end_of_turn>\n' }}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n {{'<start_of_turn>model\n'}}\n{%- endif -%}\n"
}
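For reference, a hedged sketch of how this template renders a simple exchange when applied through the processor (the repo id is an assumption; substitute the actual repository path):

```python
# Sketch: render the Gemma 3 chat template above for a single text-only user turn.
# Assumes `transformers` is installed; the repo id is an assumption.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("gaunernst/gemma-3-4b-it-qat-autoawq")
messages = [{"role": "user", "content": [{"type": "text", "text": "Describe this image."}]}]
print(processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Expected shape of the rendered prompt, per the template above:
# <bos><start_of_turn>user
# Describe this image.<end_of_turn>
# <start_of_turn>model
# An item with {"type": "image"} would render as a literal <start_of_image> marker instead.
```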
config.json ADDED
@@ -0,0 +1,49 @@
{
  "architectures": [
    "Gemma3ForConditionalGeneration"
  ],
  "boi_token_index": 255999,
  "eoi_token_index": 256000,
  "eos_token_id": [
    1,
    106
  ],
  "image_token_index": 262144,
  "initializer_range": 0.02,
  "mm_tokens_per_image": 256,
  "model_type": "gemma3",
  "text_config": {
    "hidden_size": 2560,
    "intermediate_size": 10240,
    "model_type": "gemma3_text",
    "num_hidden_layers": 34,
    "rope_scaling": {
      "factor": 8.0,
      "rope_type": "linear"
    },
    "sliding_window": 1024
  },
  "torch_dtype": "bfloat16",
  "transformers_version": "4.50.0.dev0",
  "vision_config": {
    "hidden_size": 1152,
    "image_size": 896,
    "intermediate_size": 4304,
    "model_type": "siglip_vision_model",
    "num_attention_heads": 16,
    "num_hidden_layers": 27,
    "patch_size": 14,
    "vision_use_head": false
  },
  "quantization_config": {
    "bits": 4,
    "group_size": 32,
    "quant_method": "awq",
    "version": "gemm",
    "zero_point": true,
    "modules_to_not_convert": [
      "lm_head",
      "vision_tower"
    ]
  }
}
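A back-of-the-envelope reading of the `quantization_config` above; the per-group scale and zero-point widths below are assumptions based on typical AWQ GEMM packing, not values read from this repository:

```python
# Rough storage estimate for the 4-bit AWQ linear layers described above.
# Assumptions: one fp16 scale (16 bits) and one packed 4-bit zero-point per group of 32 weights.
# lm_head and vision_tower are listed in modules_to_not_convert and stay in bf16.
bits, group_size = 4, 32
scale_bits, zero_bits = 16, 4  # assumed per-group overhead

bits_per_weight = bits + (scale_bits + zero_bits) / group_size
print(f"{bits_per_weight} bits/weight")  # 4.625 bits/weight, roughly 0.58 bytes per quantized weight
```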
generation_config.json ADDED
@@ -0,0 +1,11 @@
{
  "_from_model_config": true,
  "bos_token_id": 2,
  "cache_implementation": "hybrid",
  "eos_token_id": [
    1,
    106
  ],
  "pad_token_id": 0,
  "transformers_version": "4.50.0.dev0"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3b96a73f826c198a3dea9d2fea9f1e0932a91d3bd89e737c2eaf48fd1c0047d2
size 4038049672
preprocessor_config.json ADDED
@@ -0,0 +1,29 @@
{
  "do_convert_rgb": null,
  "do_normalize": true,
  "do_pan_and_scan": null,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_processor_type": "Gemma3ImageProcessor",
  "image_seq_length": 256,
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "pan_and_scan_max_num_crops": null,
  "pan_and_scan_min_crop_size": null,
  "pan_and_scan_min_ratio_to_activate": null,
  "processor_class": "Gemma3Processor",
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 896,
    "width": 896
  }
}
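Read together, these settings amount to: resize to 896×896 with bilinear resampling (`resample: 2`), rescale by 1/255, and normalize with mean and std of 0.5 (mapping pixels to [-1, 1]). The sketch below is illustrative only; it is not the actual `Gemma3ImageProcessor` code path and ignores the pan-and-scan options:

```python
# Illustrative sketch of the preprocessing implied by preprocessor_config.json.
# Not the actual Gemma3ImageProcessor source; pan-and-scan is ignored here.
import numpy as np
from PIL import Image

def preprocess(path: str) -> np.ndarray:
    img = Image.open(path).convert("RGB")
    img = img.resize((896, 896), resample=Image.BILINEAR)    # do_resize + size, resample=2 (bilinear)
    x = np.asarray(img).astype(np.float32) * (1.0 / 255.0)   # do_rescale with rescale_factor = 1/255
    x = (x - 0.5) / 0.5                                      # do_normalize with mean/std = 0.5 -> [-1, 1]
    return x.transpose(2, 0, 1)                              # HWC -> CHW, shape (3, 896, 896)
```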
processor_config.json ADDED
@@ -0,0 +1,4 @@
{
  "image_seq_length": 256,
  "processor_class": "Gemma3Processor"
}
special_tokens_map.json ADDED
@@ -0,0 +1,33 @@
{
  "boi_token": "<start_of_image>",
  "bos_token": {
    "content": "<bos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eoi_token": "<end_of_image>",
  "eos_token": {
    "content": "<eos>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "image_token": "<image_soft_token>",
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4667f2089529e8e7657cfb6d1c19910ae71ff5f28aa7ab2ff2763330affad795
size 33384568
tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff