---
license: cc-by-nc-4.0
base_model:
- Alsebay/NarumashiRTS-V2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nitral-AI/KukulStanta-7B
tags:
- moe
- merge
- roleplay
---
# What is it?

A MoE model for roleplaying. Since 7B models are small enough, we can combine several of them into a bigger model (which CAN be smarter).

It can handle (some limited) TSF (Trans Sexual Fiction) content, because I have included my pre-trained model in the mix.

Better than V2, BTW.

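For quick testing with `transformers`, something like the sketch below should work. The repo id is a placeholder (not this model's actual id) and the Alpaca-style prompt is a guess; check the base models for their preferred format.

```python
# Minimal loading sketch, assuming a standard Mixtral-style MoE checkpoint.
# "your-name/this-model" is a placeholder; needs `accelerate` for device_map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-name/this-model"  # placeholder, replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # bf16 3x7B won't fit in 16GB VRAM; this spills to CPU
)

# Prompt template is an assumption; adjust to whatever the base models expect.
prompt = "### Instruction:\nStart a short fantasy roleplay scene.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
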
# Recipe?

See the base model section above.

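The exact merge config isn't published here, but a `mergekit-moe` setup over these three models would look roughly like the sketch below (written as a Python script that emits the YAML). The `base_model` choice, `gate_mode`, and `positive_prompts` are my assumptions, not the actual recipe.

```python
# Hypothetical reconstruction only: the real recipe is not published.
# Field names follow mergekit's MoE config format; the specific values
# (base model, gating, prompts) are assumptions.
import yaml

moe_config = {
    "base_model": "Nitral-AI/KukulStanta-7B",  # assumed base
    "gate_mode": "hidden",                     # assumed gating strategy
    "dtype": "bfloat16",
    "experts": [
        {"source_model": "Nitral-AI/KukulStanta-7B",
         "positive_prompts": ["roleplay", "story"]},
        {"source_model": "Alsebay/NarumashiRTS-V2",
         "positive_prompts": ["TSF", "transformation"]},
        {"source_model": "SanjiWatsuki/Kunoichi-DPO-v2-7B",
         "positive_prompts": ["chat", "assistant"]},
    ],
}

with open("moe.yml", "w") as f:
    yaml.safe_dump(moe_config, f, sort_keys=False)

# Then, roughly: mergekit-moe moe.yml ./out-3x7b
```
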
# Why 3x7B?

In my testing, a 16GB VRAM card can fit a < 20B model in its GGUF version with 4-8k context length. I don't want to make a model that I can't use myself.

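As a concrete example of that budget, here is a `llama-cpp-python` sketch; the GGUF file name and the Q4_K_M quant are my assumptions.

```python
# Sketch only: assumes a Q4_K_M GGUF of this model and llama-cpp-python
# built with GPU support. At roughly 4.8 bits/weight, a 3x7B model comes
# to about 11 GB, leaving headroom for 8k context on a 16GB card.
from llama_cpp import Llama

llm = Llama(
    model_path="NarumashiRTS-3x7B.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=8192,       # the 4-8k range mentioned above
    n_gpu_layers=-1,  # offload every layer to the GPU
)

out = llm(
    "### Instruction:\nStart a short roleplay scene.\n### Response:\n",
    max_tokens=200,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```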