For RP & story gen,
a good fine-tune of Gemma-2-9B can surprise you with highly creative, authentic expressions far beyond its size, ones even Gemma-3-12B can't match.
Yet the glitches are just as obvious, and hard to ignore,
like a perfect sentence broken by one word so weird
it might as well have come from another language...

Among the many attempts to stabilize it,
grimjim/Magnolia-v3-Gemma2-8k-9B is the one I enjoy most.
So I picked the rich recoilme/recoilme-gemma-2-9B-v0.2 plus the strong lemon07r/Gemma-2-Ataraxy-v4d-9B and tried to tame it with one last merge.
And failed again...

The result is just slightly smarter, more sensitive to NSFW directions, with a little rebellious streak.
So keep retrying and editing.
It's 9B, after all.
"It feels few steps to perfection, 'cause it's google."
```yaml
models:
  - model: grimjim/Magnolia-v3-Gemma2-8k-9B
  - model: recoilme/recoilme-gemma-2-9B-v0.2
    parameters:
      density: [0.5, 0.7, 0.6, 0.7, 0.5]
      epsilon: [0.05, 0.07, 0.06, 0.07, 0.05]
      weight: [-0.01150, 0.01793, -0.01034, 0.01855, -0.01876]
  - model: lemon07r/Gemma-2-Ataraxy-v4d-9B
    parameters:
      density: [0.5, 0.3, 0.4, 0.3, 0.5]
      epsilon: [0.05, 0.03, 0.04, 0.03, 0.05]
      weight: [0.01763, -0.01992, 0.01975, -0.01096, 0.01951]
merge_method: della
base_model: grimjim/Magnolia-v3-Gemma2-8k-9B
parameters:
  normalize: false
  lambda: 0.66
tokenizer_source: base
dtype: bfloat16
```
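
If you want to reproduce the DELLA merge above or just run the result, here's a minimal sketch. It assumes mergekit and transformers are installed; the YAML filename, output path, prompt, and sampling settings are illustrative, not part of the recipe.

```python
# Minimal sketch, assuming mergekit + transformers are available.
# To reproduce the merge, save the YAML above and run, e.g.:
#   mergekit-yaml gewwa-2-9b-wtf.yaml ./Gewwa-2-9B-wtf   (filename/paths illustrative)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AIgotahole/Gewwa-2-9B-wtf"  # or the local ./Gewwa-2-9B-wtf merge output

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches dtype: bfloat16 in the merge config
    device_map="auto",
)

# Gemma-2 chat template: a single user turn is enough for RP / story gen.
messages = [{"role": "user",
             "content": "Continue the scene: the lighthouse keeper hears a knock at midnight."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.9,  # illustrative sampling values; tune for your own RP setup
    top_p=0.95,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

As the notes above say: expect to retry and edit; the weird-word glitches still surface under sampling.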