For RP & story gen,
fine-tunes of Mistral-Nemo-12B ignite the fire, setting the gold standard for the trade-off between strategy and efficiency, leaving players with confidence as well as entertainment.
They're blunt, laying bare the core of their datasets in all kinds of manners.
Within their power range, everything is brilliant;
outside it, an absolute mess...

I tried so many of them,
enjoying both the wild BarBarickoza/Dans-SakuraKaze-Picaro-12b and the cool Nitral-AI/Nera_Noctis-12B,
reckoning that a classic Nohobby/MN-12B-Siskin-v0.2 plus an avant-garde grimjim/FranFran-Something-12B could make a sexy hybrid.
And it smells yummy indeed.

Now the potential runs deeper, with a more restrained sanity touching all the burning boundaries.
Each retry bleeds.
Don't dose over 12B.
"This works so well that this doesn't matter at all."
```yaml
models:
  - model: BarBarickoza/Dans-SakuraKaze-Picaro-12b
  - model: Nohobby/MN-12B-Siskin-v0.2
  - model: grimjim/FranFran-Something-12B
  - model: Nitral-AI/Nera_Noctis-12B
merge_method: karcher
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
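
To reproduce the merge, feed the config above to mergekit. Here's a minimal sketch using mergekit's Python API (the `mergekit-yaml config.yaml ./out-dir` CLI works just as well); the file and output paths are placeholders:

```python
# Minimal sketch, assuming mergekit is installed (pip install mergekit)
# and the YAML recipe above is saved as config.yaml.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Minor-Repo-12B-omg",  # placeholder output directory
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```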

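For inference, a hedged loading sketch with transformers, keeping the bfloat16 `out_dtype` from the merge config; the prompt and sampler values are illustrative assumptions, not settings recommended by this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AIgotahole/Minor-Repo-12B-omg"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",
)

# Assumes the merged tokenizer ships a chat template (Mistral-Nemo lineage).
messages = [{"role": "user", "content": "Open a moody roleplay scene in a rain-soaked harbor town."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```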