ConicCat committed on
Commit 389451d · verified · 1 Parent(s): ed267a7

Update README.md

Files changed (1)
  1. README.md +16 -433
README.md CHANGED
@@ -1,5 +1,4 @@
1
  ---
2
- base_model: google/gemma-3-27b-it
3
  license: gemma
4
  tags:
5
  - gemma3
@@ -7,446 +6,30 @@ tags:
7
  - google
8
  pipeline_tag: image-text-to-text
9
  library_name: transformers
10
- extra_gated_heading: Access Gemma on Hugging Face
11
- extra_gated_prompt: >-
12
- To access Gemma on Hugging Face, you’re required to review and agree to
13
- Google’s usage license. To do this, please ensure you’re logged in to Hugging
14
- Face and click below. Requests are processed immediately.
15
- extra_gated_button_content: Acknowledge license
16
  ---
17
 
18
- # Gemma 3 model card
 
 
19
 
20
- **Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
21
 
22
- > [!Note]
23
- > This repository corresponds to the 27B **instruction-tuned** version of the Gemma 3 model using Quantization Aware Training (QAT).
24
- >
25
- > **The checkpoint in this repository is unquantized; please make sure to quantize it to Q4_0 with your favorite tool.**
26
- >
27
- > Thanks to QAT, the model preserves quality similar to `bfloat16` while significantly reducing the memory required
28
- > to load the model.
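Since the note leaves the quantization tool open, here is a minimal sketch of one possible Q4_0 route via the llama.cpp toolchain; the local paths, the `convert_hf_to_gguf.py` script, and the `llama-quantize` binary name are assumptions about a current llama.cpp checkout rather than anything prescribed by this card.

```python
import subprocess

# Hypothetical local paths: a downloaded copy of this repository and the GGUF outputs.
hf_dir = "./gemma-3-27b-it-qat-unquantized"
f16_gguf = "gemma-3-27b-it-qat-f16.gguf"
q4_gguf = "gemma-3-27b-it-qat-Q4_0.gguf"

# 1) Convert the unquantized Hugging Face checkpoint to GGUF (run from a llama.cpp checkout).
subprocess.run(
    ["python", "convert_hf_to_gguf.py", hf_dir, "--outtype", "f16", "--outfile", f16_gguf],
    check=True,
)

# 2) Quantize to Q4_0, the precision the QAT checkpoint was trained to tolerate.
subprocess.run(["./llama-quantize", f16_gguf, q4_gguf, "Q4_0"], check=True)
```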
29
 
 
 
30
 
31
- **Resources and Technical Documentation**:
32
 
33
- * [Gemma 3 Technical Report][g3-tech-report]
34
- * [Responsible Generative AI Toolkit][rai-toolkit]
35
- * [Gemma on Kaggle][kaggle-gemma]
36
- * [Gemma on Vertex Model Garden][vertex-mg-gemma3]
37
 
38
- **Terms of Use**: [Terms][terms]
39
 
40
- **Authors**: Google DeepMind
41
 
42
- ## Model Information
43
 
44
- Summary description and brief definition of inputs and outputs.
45
-
46
- ### Description
47
-
48
- Gemma is a family of lightweight, state-of-the-art open models from Google,
49
- built from the same research and technology used to create the Gemini models.
50
- Gemma 3 models are multimodal, handling text and image input and generating text
51
- output, with open weights for both pre-trained variants and instruction-tuned
52
- variants. Gemma 3 has a large, 128K context window, multilingual support in over
53
- 140 languages, and is available in more sizes than previous versions. Gemma 3
54
- models are well-suited for a variety of text generation and image understanding
55
- tasks, including question answering, summarization, and reasoning. Their
56
- relatively small size makes it possible to deploy them in environments with
57
- limited resources such as laptops, desktops or your own cloud infrastructure,
58
- democratizing access to state of the art AI models and helping foster innovation
59
- for everyone.
60
-
61
- ### Inputs and outputs
62
-
63
- - **Input:**
64
- - Text string, such as a question, a prompt, or a document to be summarized
65
- - Images, normalized to 896 x 896 resolution and encoded to 256 tokens
66
- each
67
- - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
68
- 32K tokens for the 1B size
69
-
70
- - **Output:**
71
- - Generated text in response to the input, such as an answer to a
72
- question, analysis of image content, or a summary of a document
73
- - Total output context of 8192 tokens
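These inputs and outputs map onto the `image-text-to-text` pipeline of the `transformers` library named in the metadata. A minimal sketch, not part of the original card: the repo id is taken from the `base_model` entry added later in this diff, and the image URL and prompt are illustrative placeholders.

```python
import torch
from transformers import pipeline

# Assumed repo id, matching the base_model entry in the updated metadata.
pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3-27b-it-qat-q4_0-unquantized",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {
        "role": "user",
        "content": [
            # Placeholder image URL; images are normalized to 896 x 896 as described above.
            {"type": "image", "url": "https://example.com/photo.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# Text (optionally with images) in, text out, within the context limits listed above.
output = pipe(text=messages, max_new_tokens=128)
print(output[0]["generated_text"][-1]["content"])
```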
74
-
75
- ### Citation
76
-
77
- ```none
78
- @article{gemma_2025,
79
- title={Gemma 3},
80
- url={https://goo.gle/Gemma3Report},
81
- publisher={Kaggle},
82
- author={Gemma Team},
83
- year={2025}
84
- }
85
- ```
86
-
87
- ## Model Data
88
-
89
- Data used for model training and how the data was processed.
90
-
91
- ### Training Dataset
92
-
93
- These models were trained on a dataset of text data that includes a wide variety
94
- of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
95
- trained with 12 trillion tokens, the 4B model was trained with 4 trillion tokens, and
96
- the 1B with 2 trillion tokens. Here are the key components:
97
-
98
- - Web Documents: A diverse collection of web text ensures the model is
99
- exposed to a broad range of linguistic styles, topics, and vocabulary. The
100
- training dataset includes content in over 140 languages.
101
- - Code: Exposing the model to code helps it to learn the syntax and
102
- patterns of programming languages, which improves its ability to generate
103
- code and understand code-related questions.
104
- - Mathematics: Training on mathematical text helps the model learn logical
105
- reasoning, symbolic representation, and to address mathematical queries.
106
- - Images: A wide range of images enables the model to perform image
107
- analysis and visual data extraction tasks.
108
-
109
- The combination of these diverse data sources is crucial for training a powerful
110
- multimodal model that can handle a wide variety of different tasks and data
111
- formats.
112
-
113
- ### Data Preprocessing
114
-
115
- Here are the key data cleaning and filtering methods applied to the training
116
- data:
117
-
118
- - CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
119
- was applied at multiple stages in the data preparation process to ensure
120
- the exclusion of harmful and illegal content.
121
- - Sensitive Data Filtering: As part of making Gemma pre-trained models
122
- safe and reliable, automated techniques were used to filter out certain
123
- personal information and other sensitive data from training sets.
124
- - Additional methods: Filtering based on content quality and safety in
125
- line with [our policies][safety-policies].
126
-
127
- ## Implementation Information
128
-
129
- Details about the model internals.
130
-
131
- ### Hardware
132
-
133
- Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
134
- TPUv5p and TPUv5e). Training vision-language models (VLMS) requires significant
135
- computational power. TPUs, designed specifically for matrix operations common in
136
- machine learning, offer several advantages in this domain:
137
-
138
- - Performance: TPUs are specifically designed to handle the massive
139
- computations involved in training VLMs. They can speed up training
140
- considerably compared to CPUs.
141
- - Memory: TPUs often come with large amounts of high-bandwidth memory,
142
- allowing for the handling of large models and batch sizes during training.
143
- This can lead to better model quality.
144
- - Scalability: TPU Pods (large clusters of TPUs) provide a scalable
145
- solution for handling the growing complexity of large foundation models.
146
- You can distribute training across multiple TPU devices for faster and more
147
- efficient processing.
148
- - Cost-effectiveness: In many scenarios, TPUs can provide a more
149
- cost-effective solution for training large models compared to CPU-based
150
- infrastructure, especially when considering the time and resources saved
151
- due to faster training.
152
- - These advantages are aligned with
153
- [Google's commitments to operate sustainably][sustainability].
154
-
155
- ### Software
156
-
157
- Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
158
-
159
- JAX allows researchers to take advantage of the latest generation of hardware,
160
- including TPUs, for faster and more efficient training of large models. ML
161
- Pathways is Google's latest effort to build artificially intelligent systems
162
- capable of generalizing across multiple tasks. This is especially suitable for
163
- foundation models, including large language models like these ones.
164
-
165
- Together, JAX and ML Pathways are used as described in the
166
- [paper about the Gemini family of models][gemini-2-paper]; *"the 'single
167
- controller' programming model of Jax and Pathways allows a single Python
168
- process to orchestrate the entire training run, dramatically simplifying the
169
- development workflow."*
170
-
171
- ## Evaluation
172
-
173
- > [!Note]
174
- > The evaluations in this section correspond to the original checkpoint, not the QAT checkpoint.
175
- >
176
-
177
- Model evaluation metrics and results.
178
-
179
- ### Benchmark Results
180
-
181
- These models were evaluated against a large collection of different datasets and
182
- metrics to cover different aspects of text generation:
183
-
184
- #### Reasoning and factuality
185
-
186
- | Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
187
- | ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
188
- | [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
189
- | [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
190
- | [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
191
- | [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
192
- | [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
193
- | [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
194
- | [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
195
- | [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
196
- | [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
197
- | [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
198
- | [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
199
-
200
- [hellaswag]: https://arxiv.org/abs/1905.07830
201
- [boolq]: https://arxiv.org/abs/1905.10044
202
- [piqa]: https://arxiv.org/abs/1911.11641
203
- [socialiqa]: https://arxiv.org/abs/1904.09728
204
- [triviaqa]: https://arxiv.org/abs/1705.03551
205
- [naturalq]: https://github.com/google-research-datasets/natural-questions
206
- [arc]: https://arxiv.org/abs/1911.01547
207
- [winogrande]: https://arxiv.org/abs/1907.10641
208
- [bbh]: https://paperswithcode.com/dataset/bbh
209
- [drop]: https://arxiv.org/abs/1903.00161
210
-
211
- #### STEM and code
212
-
213
- | Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
214
- | ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
215
- | [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
216
- | [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
217
- | [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
218
- | [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
219
- | [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
220
- | [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
221
- | [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
222
- | [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
223
-
224
- [mmlu]: https://arxiv.org/abs/2009.03300
225
- [agieval]: https://arxiv.org/abs/2304.06364
226
- [math]: https://arxiv.org/abs/2103.03874
227
- [gsm8k]: https://arxiv.org/abs/2110.14168
228
- [gpqa]: https://arxiv.org/abs/2311.12022
229
- [mbpp]: https://arxiv.org/abs/2108.07732
230
- [humaneval]: https://arxiv.org/abs/2107.03374
231
-
232
- #### Multilingual
233
-
234
- | Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
235
- | ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
236
- | [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
237
- | [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
238
- | [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
239
- | [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
240
- | [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
241
- | [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
242
- | [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
243
-
244
- [mgsm]: https://arxiv.org/abs/2210.03057
245
- [flores]: https://arxiv.org/abs/2106.03193
246
- [xquad]: https://arxiv.org/abs/1910.11856v3
247
- [global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
248
- [wmt24pp]: https://arxiv.org/abs/2502.12404v1
249
- [eclektic]: https://arxiv.org/abs/2502.21228
250
- [indicgenbench]: https://arxiv.org/abs/2404.16816
251
-
252
- #### Multimodal
253
-
254
- | Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
255
- | ------------------------------ |:-------------:|:--------------:|:--------------:|
256
- | [COCOcap][coco-cap] | 102 | 111 | 116 |
257
- | [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
258
- | [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
259
- | [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
260
- | [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
261
- | [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
262
- | [ReMI][remi] | 27.3 | 38.5 | 44.8 |
263
- | [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
264
- | [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
265
- | [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
266
- | [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
267
- | [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
268
- | [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
269
- | [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
270
- | [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
271
-
272
- [coco-cap]: https://cocodataset.org/#home
273
- [docvqa]: https://www.docvqa.org/
274
- [info-vqa]: https://arxiv.org/abs/2104.12756
275
- [mmmu]: https://arxiv.org/abs/2311.16502
276
- [textvqa]: https://textvqa.org/
277
- [realworldqa]: https://paperswithcode.com/dataset/realworldqa
278
- [remi]: https://arxiv.org/html/2406.09175v1
279
- [ai2d]: https://allenai.org/data/diagrams
280
- [chartqa]: https://arxiv.org/abs/2203.10244
281
- [vqav2]: https://visualqa.org/index.html
282
- [blinkvqa]: https://arxiv.org/abs/2404.12390
283
- [okvqa]: https://okvqa.allenai.org/
284
- [tallyqa]: https://arxiv.org/abs/1810.12440
285
- [ss-vqa]: https://arxiv.org/abs/1908.02660
286
- [countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
287
-
288
- ## Ethics and Safety
289
-
290
- Ethics and safety evaluation approach and results.
291
-
292
- ### Evaluation Approach
293
-
294
- Our evaluation methods include structured evaluations and internal red-teaming
295
- testing of relevant content policies. Red-teaming was conducted by a number of
296
- different teams, each with different goals and human evaluation metrics. These
297
- models were evaluated against a number of different categories relevant to
298
- ethics and safety, including:
299
-
300
- - **Child Safety**: Evaluation of text-to-text and image to text prompts
301
- covering child safety policies, including child sexual abuse and
302
- exploitation.
303
- - **Content Safety:** Evaluation of text-to-text and image to text prompts
304
- covering safety policies including, harassment, violence and gore, and hate
305
- speech.
306
- - **Representational Harms**: Evaluation of text-to-text and image to text
307
- prompts covering safety policies including bias, stereotyping, and harmful
308
- associations or inaccuracies.
309
-
310
- In addition to development level evaluations, we conduct "assurance
311
- evaluations" which are our 'arms-length' internal evaluations for responsibility
312
- governance decision making. They are conducted separately from the model
313
- development team, to inform decision making about release. High level findings
314
- are fed back to the model team, but prompt sets are held-out to prevent
315
- overfitting and preserve the results' ability to inform decision making.
316
- Assurance evaluation results are reported to our Responsibility & Safety Council
317
- as part of release review.
318
-
319
- ### Evaluation Results
320
-
321
- For all areas of safety testing, we saw major improvements in the categories of
322
- child safety, content safety, and representational harms relative to previous
323
- Gemma models. All testing was conducted without safety filters to evaluate the
324
- model capabilities and behaviors. For both text-to-text and image-to-text, and
325
- across all model sizes, the model produced minimal policy violations, and showed
326
- significant improvements over previous Gemma models' performance with respect
327
- to ungrounded inferences. A limitation of our evaluations was they included only
328
- English language prompts.
329
-
330
- ## Usage and Limitations
331
-
332
- These models have certain limitations that users should be aware of.
333
-
334
- ### Intended Usage
335
-
336
- Open vision-language models (VLMs) have a wide range of applications
337
- across various industries and domains. The following list of potential uses is
338
- not comprehensive. The purpose of this list is to provide contextual information
339
- about the possible use-cases that the model creators considered as part of model
340
- training and development.
341
-
342
- - Content Creation and Communication
343
- - Text Generation: These models can be used to generate creative text
344
- formats such as poems, scripts, code, marketing copy, and email drafts.
345
- - Chatbots and Conversational AI: Power conversational interfaces
346
- for customer service, virtual assistants, or interactive applications.
347
- - Text Summarization: Generate concise summaries of a text corpus,
348
- research papers, or reports.
349
- - Image Data Extraction: These models can be used to extract,
350
- interpret, and summarize visual data for text communications.
351
- - Research and Education
352
- - Natural Language Processing (NLP) and VLM Research: These
353
- models can serve as a foundation for researchers to experiment with VLM
354
- and NLP techniques, develop algorithms, and contribute to the
355
- advancement of the field.
356
- - Language Learning Tools: Support interactive language learning
357
- experiences, aiding in grammar correction or providing writing practice.
358
- - Knowledge Exploration: Assist researchers in exploring large
359
- bodies of text by generating summaries or answering questions about
360
- specific topics.
361
-
362
- ### Limitations
363
-
364
- - Training Data
365
- - The quality and diversity of the training data significantly
366
- influence the model's capabilities. Biases or gaps in the training data
367
- can lead to limitations in the model's responses.
368
- - The scope of the training dataset determines the subject areas
369
- the model can handle effectively.
370
- - Context and Task Complexity
371
- - Models are better at tasks that can be framed with clear
372
- prompts and instructions. Open-ended or highly complex tasks might be
373
- challenging.
374
- - A model's performance can be influenced by the amount of context
375
- provided (longer context generally leads to better outputs, up to a
376
- certain point).
377
- - Language Ambiguity and Nuance
378
- - Natural language is inherently complex. Models might struggle
379
- to grasp subtle nuances, sarcasm, or figurative language.
380
- - Factual Accuracy
381
- - Models generate responses based on information they learned
382
- from their training datasets, but they are not knowledge bases. They
383
- may generate incorrect or outdated factual statements.
384
- - Common Sense
385
- - Models rely on statistical patterns in language. They might
386
- lack the ability to apply common sense reasoning in certain situations.
387
-
388
- ### Ethical Considerations and Risks
389
-
390
- The development of vision-language models (VLMs) raises several ethical
391
- concerns. In creating an open model, we have carefully considered the following:
392
-
393
- - Bias and Fairness
394
- - VLMs trained on large-scale, real-world text and image data can
395
- reflect socio-cultural biases embedded in the training material. These
396
- models underwent careful scrutiny, with input data pre-processing described
397
- and posterior evaluations reported in this card.
398
- - Misinformation and Misuse
399
- - VLMs can be misused to generate text that is false, misleading,
400
- or harmful.
401
- - Guidelines are provided for responsible use with the model, see the
402
- [Responsible Generative AI Toolkit][rai-toolkit].
403
- - Transparency and Accountability:
404
- - This model card summarizes details on the models' architecture,
405
- capabilities, limitations, and evaluation processes.
406
- - A responsibly developed open model offers the opportunity to
407
- share innovation by making VLM technology accessible to developers and
408
- researchers across the AI ecosystem.
409
-
410
- Risks identified and mitigations:
411
-
412
- - **Perpetuation of biases**: It's encouraged to perform continuous
413
- monitoring (using evaluation metrics, human review) and the exploration of
414
- de-biasing techniques during model training, fine-tuning, and other use
415
- cases.
416
- - **Generation of harmful content**: Mechanisms and guidelines for content
417
- safety are essential. Developers are encouraged to exercise caution and
418
- implement appropriate content safety safeguards based on their specific
419
- product policies and application use cases.
420
- - **Misuse for malicious purposes**: Technical limitations and developer
421
- and end-user education can help mitigate against malicious applications of
422
- VLMs. Educational resources and reporting mechanisms for users to flag
423
- misuse are provided. Prohibited uses of Gemma models are outlined in the
424
- [Gemma Prohibited Use Policy][prohibited-use].
425
- - **Privacy violations**: Models were trained on data filtered for removal
426
- of certain personal information and other sensitive data. Developers are
427
- encouraged to adhere to privacy regulations with privacy-preserving
428
- techniques.
429
-
430
- ### Benefits
431
-
432
- At the time of release, this family of models provides high-performance open
433
- vision-language model implementations designed from the ground up for
434
- responsible AI development compared to similarly sized models.
435
-
436
- Using the benchmark evaluation metrics described in this document, these models
437
- have been shown to provide superior performance to other, comparably-sized open model
438
- alternatives.
439
-
440
- [g3-tech-report]: https://goo.gle/Gemma3Report
441
- [rai-toolkit]: https://ai.google.dev/responsible
442
- [kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
443
- [vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
444
- [terms]: https://ai.google.dev/gemma/terms
445
- [safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
446
- [prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
447
- [tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
448
- [sustainability]: https://sustainability.google/operating-sustainably/
449
- [jax]: https://github.com/jax-ml/jax
450
- [ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
452
- [gemini-2-paper]: https://arxiv.org/abs/2312.11805
 
1
  ---
 
2
  license: gemma
3
  tags:
4
  - gemma3
 
6
  - google
7
  pipeline_tag: image-text-to-text
8
  library_name: transformers
9
+ base_model:
10
+ - google/gemma-3-27b-it-qat-q4_0-unquantized
 
 
 
 
11
  ---
12
 
13
+ <p align="left">
14
+ <img width="65%" src="Fornax.jpg">
15
+ </p>
16
 
17
+ ### Gemma 3 27B V4 Fornax
18
 
19
+ Gemma Fornax is a distillation of the updated DeepSeek R1 05/28 onto Gemma 3 27B, with a particular focus on timely and generalizable reasoning beyond coding and math.
20
+ Most other open-source thinking models, especially smaller ones, fail to generalize their reasoning to tasks other than coding or math due to an overly heavy focus on
21
+ GRPO-zero CoT training, which only generalizes well to coding and math.
 
 
 
 
22
 
23
+ Instead of using GRPO, this model uses SFT on a wide variety of high-quality, diverse reasoning traces from DeepSeek R1 05/28 to force Gemma 3 to learn to effectively
24
+ generalize its reasoning capabilities to a large number of tasks, as an extension of the LiMO paper's approach to math/coding CoT.
25
 
26
+ Varying CoT length in conjunction with explicit noise regularization during training also prevents the characteristic length overfitting of GRPO, which tends to manifest as waffling, where the model reasons to a set length even when it has already reached an answer.
27
 
 
 
 
 
28
 
29
+ ## Recommended Settings
30
 
31
+ Temperature 0.7 + Nsigma 1
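A minimal sketch of applying these settings with `transformers`, assuming a hypothetical local path for this checkpoint. Top-nsigma ("Nsigma 1") is not a built-in `transformers` sampler, so only the temperature is set here; backends that implement top-nsigma sampling (some llama.cpp-based frontends do) can apply both.

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

# Hypothetical local path to this Fornax checkpoint.
model_id = "./gemma-3-27b-fornax"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
)

messages = [
    {"role": "user", "content": [{"type": "text", "text": "Plan a three-day trip to Kyoto."}]}
]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)

# Recommended temperature of 0.7; "Nsigma 1" must come from a backend with top-nsigma support.
out = model.generate(**inputs, do_sample=True, temperature=0.7, max_new_tokens=1024)
print(processor.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```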
32
 
33
+ ## Special Thanks:
34
 
35
+ Google, for open-sourcing the excellent Gemma 3 model line.