Update README.md

This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-23B-A4B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) for more details on the model.

---

Huihui-MoE-23B-A4B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built on the huihui-ai/Huihui-Qwen3-4B-abliterated-v2 base model. It modifies the standard Transformer architecture by replacing each MLP layer with an MoE layer of 8 experts, so only a subset of the experts runs for any given token; the name indicates roughly 4B active parameters out of 23B total, which keeps inference efficient despite the large parameter count. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
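
To make the architectural swap concrete, here is a minimal, illustrative PyTorch sketch of token-level routing across 8 experts. It is a toy under stated assumptions, not the model's actual implementation: the hidden sizes, top-2 routing, and all class and variable names are invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy stand-in for a dense Transformer MLP: 8 expert MLPs plus a router.
    All dimensions and the top-2 routing rule are illustrative assumptions."""

    def __init__(self, d_model=2048, d_ff=8192, n_experts=8, top_k=2):
        super().__init__()
        # The gate scores every expert for every token.
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        # Each expert has the same shape as the dense MLP it replaces.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):
        # x: (tokens, d_model). Route each token to its top_k experts only,
        # so most parameters stay idle on any given token.
        scores = F.softmax(self.gate(x), dim=-1)        # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # (tokens, top_k)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens sent to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out
```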

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
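
A minimal sketch of that step, assuming the current Homebrew formula name `llama.cpp`:

```bash
brew install llama.cpp
```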
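
Once installed, a quantized GGUF can be run directly from the Hub. The invocation below is a sketch using llama.cpp's `--hf-repo`/`--hf-file` flags; the repo and file names are placeholders, since this excerpt does not show the actual quantized filename:

```bash
# Replace the repo and .gguf filename with the ones listed in this repository.
llama-cli --hf-repo <your-username>/Huihui-MoE-23B-A4B-abliterated-GGUF \
  --hf-file <quantized-model-file>.gguf \
  -p "The meaning to life and the universe is"
```

The same flags work with `llama-server` if you prefer an OpenAI-compatible HTTP endpoint instead of an interactive prompt.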