|
--- |
|
base_model: |
|
- Sao10K/Frostwind-v2.1-m7 |
|
- SanjiWatsuki/Kunoichi-DPO-v2-7B |
|
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo |
|
license: cc-by-nc-4.0 |
|
tags: |
|
- moe |
|
- merge |
|
- Roleplay |
|
--- |
|
> [!IMPORTANT] |
|
> This model may be buggy. The final version has been released. Thank you for using this model.
|
> |
|
> For TextGenWebUI users only: use the Transformers loader for all versions of this series. I have just learned that there is an issue with llama.cpp in TextGenWebUI, so please don't use the GGUF version, as it has a buggy pointer.
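Since the note above recommends the Transformers loader over GGUF/llama.cpp, here is a minimal loading sketch with the `transformers` library. The repo id `Alsebay/HyouKan-3x7B` is an assumption inferred from the linked V2/V2.1 pages; substitute the actual model page. 4-bit loading via `bitsandbytes` is shown because the original 4-bit path is reported to work below.

```python
# Minimal sketch: loading this MoE merge with the Transformers library,
# as recommended above instead of the GGUF/llama.cpp path.
# NOTE: "Alsebay/HyouKan-3x7B" is an assumed repo id; replace it with
# the actual model page. Requires `bitsandbytes` for 4-bit loading.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Alsebay/HyouKan-3x7B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # spread layers across available GPUs/CPU
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

# Simple generation call; sampling settings are left at defaults.
prompt = "User: Hello!\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```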
|
# Experimental 3x7B model |
|
|
|
An experimental MoE model customized for all-around roleplay. It understands character cards well and has strong logic.
|
|
|
Thanks to the original model authors, Sao10K, SanjiWatsuki, and macadeliccc, for creating those models. Pardon me for keeping the recipe hidden. :(
|
|
|
If you need 32k context length, you could try these versions:
|
|
|
- [V2](https://huggingface.co/Alsebay/HyouKan-3x7B-V2-32k) |
|
- [V2.1](https://huggingface.co/Alsebay/HyouKan-3x7B-V2.1-32k) |
|
|
|
Another version, with the expert count lowered to 2:
|
|
|
- https://huggingface.co/Alsebay/Hyou-2x7B |
|
|
|
# You may want to see this: https://huggingface.co/Alsebay/My_LLMs_Leaderboard
|
# It's ridiculous that I can run the original version in 4-bit but can't run the GGUF version. Maybe my GPU can't handle it?
|
|
|
I have tried everything from Q2 to fp16 with no luck. 😥 It seems there is a bug in the model pointer, possibly caused by Sao10K/Frostwind-v2.1-m7, since it is an experimental model.
|
|
|
Link here: https://huggingface.co/Alsebay/HyouKan-GGUF |
|
|
|
# Thanks to [mradermacher](https://huggingface.co/mradermacher) for quantizing my model again.
|
|
|
[mradermacher](https://huggingface.co/mradermacher)'s version; he did all the remaining quantizations, including imatrix: https://huggingface.co/mradermacher/HyouKan-3x7B-GGUF/
|
|
|
# Is this model good? Want more discussion? Let me know in the community tab! ヾ(≧▽≦*)o