---
license: cc-by-nc-4.0
base_model:
  - Alsebay/NarumashiRTS-V2
  - SanjiWatsuki/Kunoichi-DPO-v2-7B
  - Nitral-AI/KukulStanta-7B
tags:
  - moe
  - merge
  - roleplay
---

# NaruMOE-v1-3x7B

## What is it?

A MoE model for roleplaying. Since 7B models are small, we can combine several of them into a bigger model (which can be smarter).

It can handle some (limited) TSF (Trans Sexual Fiction) content, because my pre-trained model is included in the merge.

Better than V2 BTW.
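
Below is a minimal usage sketch with the `transformers` library. The repo id `Alsebay/NaruMOE-v1-3x7B` is assumed from the model name and author (it is not stated explicitly above), and the prompt is just an illustrative roleplay opener.

```python
# Minimal sketch: load the merged MoE model and generate a roleplay reply.
# Assumed repo id: Alsebay/NaruMOE-v1-3x7B
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/NaruMOE-v1-3x7B"  # assumption, adjust to the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 to reduce VRAM use
    device_map="auto",
)

prompt = "You are a friendly tavern keeper. A traveler walks in.\nTraveler: Hello!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```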

## GGUF Version?

Here

## Recipe?

You can see the base model section in the metadata above; a hedged sketch of such a merge config follows.
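
The exact merge recipe isn't published here, but a MoE merge of these three models would typically be described by a mergekit-moe config. The sketch below writes one such config as an assumption: the choice of base model, the gate mode, and the positive prompts are illustrative guesses, not the author's actual settings.

```python
# Hedged sketch of a mergekit-moe config for the three listed base models.
# Field names follow the mergekit-moe config format; values are assumptions.
import yaml

moe_config = {
    "base_model": "SanjiWatsuki/Kunoichi-DPO-v2-7B",  # assumed choice of base
    "gate_mode": "hidden",                             # assumed gating mode
    "dtype": "bfloat16",
    "experts": [
        {"source_model": "SanjiWatsuki/Kunoichi-DPO-v2-7B",
         "positive_prompts": ["roleplay", "conversation"]},
        {"source_model": "Nitral-AI/KukulStanta-7B",
         "positive_prompts": ["story", "narrative"]},
        {"source_model": "Alsebay/NarumashiRTS-V2",
         "positive_prompts": ["TSF", "transformation fiction"]},
    ],
}

with open("naru-moe.yml", "w") as f:
    yaml.safe_dump(moe_config, f, sort_keys=False)

# The resulting file would then be passed to the mergekit-moe CLI, e.g.:
#   mergekit-moe naru-moe.yml ./NaruMOE-v1-3x7B
```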

## Why 3x7B?

I tested that a 16GB VRAM card can fit the GGUF version of a model under 20B parameters with 4-8k context length. I don't want to make a model that I can't use.
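
For example, a 4-bit GGUF quant of a 3x7B merge can be run on a 16GB card with `llama-cpp-python`. The file name below is hypothetical; substitute whichever quant you download from the GGUF repo linked above.

```python
# Sketch: run a quantized GGUF build within a 16GB VRAM budget.
from llama_cpp import Llama

llm = Llama(
    model_path="NaruMOE-v1-3x7B.Q4_K_M.gguf",  # hypothetical Q4_K_M quant file
    n_ctx=8192,        # 4-8k context fits alongside a <20B model on 16GB VRAM
    n_gpu_layers=-1,   # offload all layers to the GPU
)

out = llm(
    "You are a knight guarding a castle gate.\nVisitor: May I enter?\nKnight:",
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```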