---
license: cc-by-nc-4.0
base_model:
- Alsebay/NarumashiRTS-V2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nitral-AI/KukulStanta-7B
tags:
- moe
- merge
- roleplay
---
# What is it?

A MoE (Mixture of Experts) model for roleplaying. Since 7B models are small enough, we can combine several of them into a bigger model (which CAN be smarter).
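
A minimal usage sketch with `transformers`. The repo id `Alsebay/NaruMOE-3x7B-v1` is an assumption inferred from the GGUF link below; everything else is the standard API:

```python
# Minimal sketch, assuming the full-precision repo id "Alsebay/NaruMOE-3x7B-v1"
# (inferred from the GGUF repo name below).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/NaruMOE-3x7B-v1"  # assumption: not confirmed in the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    device_map="auto",           # spread layers across available GPU/CPU memory
)

prompt = "You are a fantasy innkeeper. Greet the adventurer who just walked in."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```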

It also handles (some limited) TSF (transsexual fiction) content, because I have included my pre-trained model in the merge.

Better than V2 BTW.

# GGUF Version?
[Here](https://huggingface.co/Alsebay/NaruMOE-3x7B-v1-GGUF/)
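
If you use the GGUF build, a sketch with `llama-cpp-python` follows; the exact quantization filename (`Q4_K_M` here) is an assumption, so check the repo's file list:

```python
# Sketch of loading the GGUF build with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="NaruMOE-3x7B-v1.Q4_K_M.gguf",  # assumed filename; verify in the repo
    n_ctx=4096,        # 4k context, in line with the VRAM notes below
    n_gpu_layers=-1,   # offload all layers to the GPU if they fit
)

out = llm(
    "### Instruction:\nIntroduce yourself in character.\n### Response:\n",
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```
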
# Recipe?

See the base model section above.
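
For illustration only, here is a hypothetical reconstruction of what a `mergekit-moe` recipe over those three models could look like. The choice of `base_model`, `gate_mode`, `dtype`, and the gating prompts are all assumptions, not the author's actual settings:

```python
# Hypothetical mergekit-moe recipe; values below are guesses for illustration.
import subprocess

CONFIG = """\
base_model: Nitral-AI/KukulStanta-7B   # assumption: base not stated in the card
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: Nitral-AI/KukulStanta-7B
    positive_prompts: ["roleplay", "story"]
  - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    positive_prompts: ["chat", "assistant"]
  - source_model: Alsebay/NarumashiRTS-V2
    positive_prompts: ["TSF", "transformation"]
"""

with open("narumoe.yml", "w") as f:
    f.write(CONFIG)

# mergekit's MoE entry point; requires `pip install mergekit`.
subprocess.run(["mergekit-moe", "narumoe.yml", "./NaruMOE-3x7B-v1"], check=True)
```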

# Why 3x7B?

In my tests, a 16GB VRAM card can fit the GGUF version of a model under 20B parameters with 4-8k context length. I don't want to make a model that I can't use.
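
A back-of-the-envelope check of that claim. The parameter count (~18.5B total for a 3x7B MoE with shared attention), bits per weight for a Q4_K_M-style quant, and the fp16 GQA KV-cache cost are rough assumptions, not measurements:

```python
# Rough GGUF memory estimate in GiB: quantized weights + KV cache.
# All constants below are assumptions for a Mistral-based 3x7B MoE.

def gguf_fit_estimate(params_b=18.5, bits_per_weight=4.5,
                      n_ctx=8192, kv_gib_per_k_tokens=0.125):
    weights = params_b * 1e9 * bits_per_weight / 8 / 2**30  # quantized weights
    kv_cache = (n_ctx / 1024) * kv_gib_per_k_tokens          # fp16 GQA KV cache
    return weights + kv_cache

print(f"{gguf_fit_estimate():.1f} GiB")  # ~10.7 GiB, comfortably under 16 GiB
```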