---
base_model:
- google/gemma-3-27b-it
- google/gemma-3-27b-pt
- Columbidae/gemma-3-27b-half
library_name: transformers
tags:
- mergekit
- merge
---
# ✨G3 Glitter 27B✨
A creative writing model based on Gemma 3 27B.
[Columbidae/gemma-3-27b-half](https://huggingface.co/Columbidae/gemma-3-27b-half), a 50/50 merge of 27B IT and 27B PT, was used as the base model. (This was done because of the success of [Starshine](https://huggingface.co/ToastyPigeon/Gemma-3-Starshine-12B), a 50/50 IT and PT merge.)
The inclusion of the PT model does weaken instruction-following, but it also weakens the censorship/hesitancy to participate in certain fictional stories. The prose also becomes more natural with less of the IT model included.
**This model does better with short and to-the-point prompts. Long, detailed system prompts will often confuse it.** (Tested with 1000-2000 token system prompts, which gave lackluster results compared to 100-500 token prompts.)
## Instruct Format
Uses the Gemma 2/3 instruct and context format. Like Glitter 12B, this model works well with `temp = 1, top-nsigma = 1.5`.
```
<start_of_turn>user
{User messages; can also put sysprompt here to use the built-in g3 training}<end_of_turn>
<start_of_turn>model
{model response}<end_of_turn>
```
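For reference, a minimal sketch of building this prompt format via the `transformers` chat template, assuming the repo ships the standard Gemma 3 tokenizer and template. The model ID below is a placeholder; substitute wherever this merge is hosted. Sampling settings (`temp = 1, top-nsigma = 1.5`) are applied in whatever backend you use for generation.
```python
from transformers import AutoTokenizer

# Placeholder model ID; point this at the actual repo for the merge.
tokenizer = AutoTokenizer.from_pretrained("Columbidae/gemma-3-27b-half")

messages = [
    # Gemma has no dedicated system role; fold any short system prompt
    # into the user turn, as noted above.
    {"role": "user", "content": "Continue the story: the lighthouse keeper heard a knock."},
]

# add_generation_prompt=True appends the opening <start_of_turn>model tag
# so the model knows it should respond next.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```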