Update README.md
README.md CHANGED
@@ -9,7 +9,11 @@ tags:
 
 An experimental MoE model customized for all-around roleplay. It understands character cards well and has strong logic.
 
-
+If you want 32k context length capability, you can try these versions:
+- [V2](https://huggingface.co/mradermacher/HyouKan-3x7B-V2-32k)
+- [V2.1](https://huggingface.co/mradermacher/HyouKan-3x7B-V2.1-32k)
+
+# It's ridiculous that I can run the original version in 4-bit but can't run the GGUF version. Maybe my GPU can't handle it?
 
 This is the error I got when trying to load the GGUF version of this model:
 
@@ -19,5 +23,8 @@ I have tried everything from Q2 to fp16, no luck. 😥
 
 Link here: https://huggingface.co/Alsebay/HyouKan-GGUF
 
+# Thanks to [mradermacher](https://huggingface.co/mradermacher) for quantizing my model again.
+
 [mradermacher](https://huggingface.co/mradermacher)'s version; he did all the remaining quantizations, and also imatrix: https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/
+
 # Is this model good? Want more discussion? Let me know in the community tab! ヾ(≧▽≦*)o
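For reference, here is a minimal sketch of the working setup the README describes: loading the original (non-GGUF) weights in 4-bit with transformers and bitsandbytes. The repo id and the prompt are assumptions for illustration; check the model card for the actual id and prompt format.

```python
# Minimal 4-bit loading sketch with transformers + bitsandbytes.
# "Alsebay/HyouKan-3x7B" and the prompt are assumptions for illustration;
# substitute the actual repo id and prompt format from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Alsebay/HyouKan-3x7B"  # assumed repo id

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 while weights stay 4-bit
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",  # place layers on available GPU memory automatically
)

prompt = "You are the character described in the card. Greet the user."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```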
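And a corresponding sketch for trying the GGUF quants (for example from the mradermacher repo linked above) with llama-cpp-python. The filename is hypothetical, so pick a real one from the repo's file list; if loading fails as described above, lowering `n_gpu_layers` or `n_ctx` is the usual first step when VRAM is the suspect.

```python
# Sketch of loading a GGUF quant with llama-cpp-python.
# The filename is hypothetical; download a real quant from the GGUF repo first.
from llama_cpp import Llama

llm = Llama(
    model_path="HyouKan-3x7B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # raise toward 32768 only with the V2/V2.1 32k variants
    n_gpu_layers=-1,  # offload all layers; reduce if the GPU runs out of memory
)

out = llm("You are the character described in the card. Greet the user.",
          max_tokens=128)
print(out["choices"][0]["text"])
```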