---
base_model:
  - grimjim/FranFran-Something-12B
  - Nitral-AI/Nera_Noctis-12B
  - Nohobby/MN-12B-Siskin-v0.2
  - BarBarickoza/Dans-SakuraKaze-Picaro-12b
library_name: transformers
tags:
  - mergekit
  - merge
  - roleplay
  - story-writing
  - adventure
  - mistral
  - rp
  - nsfw
language:
  - en
  - zh
  - ja
  - fr
  - ko
  - de
  - ru
  - es
  - pt
---

For RP & story gen,
fine-tunes of Mistral-Nemo-12B ignite the fire, setting the gold standard for the balance between capability and efficiency, giving players confidence on top of entertainment.
The base is blunt, showcasing the core of its datasets in all sorts of manners.
Within its power range everything is brilliant;
out of it, an absolute mess...

I tried many of those,
enjoying both the wild BarBarickoza/Dans-SakuraKaze-Picaro-12b and the cool Nitral-AI/Nera_Noctis-12B,
and reckoned that the classic Nohobby/MN-12B-Siskin-v0.2 plus the avant-garde grimjim/FranFran-Something-12B could make a sexy hybrid.
And it smells yummy indeed.

Now the potential runs deeper, with a more restrained sanity touching all the burning boundaries.
Each retry bleeds.
Just don't expect a dose beyond what 12B can deliver.
"This works so well that this doesn't matter at all."
```yaml
models:
  - model: BarBarickoza/Dans-SakuraKaze-Picaro-12b
  - model: Nohobby/MN-12B-Siskin-v0.2
  - model: grimjim/FranFran-Something-12B
  - model: Nitral-AI/Nera_Noctis-12B
merge_method: karcher
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
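
A minimal sketch for loading the merged model with transformers; the repo id below is an assumption based on this card, and the prompt and sampling settings are only a starting point.

```python
# Minimal usage sketch; the repo id and generation settings are assumptions, not part of the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "AIgotahole/Minor-Repo-12B-omg"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # weights are exported as bfloat16 (out_dtype above)
    device_map="auto",
)

# Mistral-Nemo fine-tunes ship a chat template, so apply it for RP-style prompts.
messages = [{"role": "user", "content": "Open a slow-burn adventure scene in a rain-soaked port town."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```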