Triangle104 committed (verified) · commit cb73f01 · 1 parent: 1df1c4f

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -15,7 +15,7 @@ This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-23B-A4B-abli
 Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-23B-A4B-abliterated) for more details on the model.
 
 ---
-uihui-MoE-23B-A4B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Huihui-Qwen3-4B-abliterated-v2
+Huihui-MoE-23B-A4B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Huihui-Qwen3-4B-abliterated-v2
 base model. It enhances the standard Transformer architecture by
 replacing MLP layers with MoE layers, each containing 8 experts, to
 achieve high performance with efficient inference. The model is designed
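
For context on the sentence being corrected: the README describes replacing each Transformer MLP block with an MoE layer containing 8 experts. Below is a minimal, hypothetical sketch of that pattern in PyTorch. The class name, dimensions, and top-k routing choice are illustrative assumptions, not the actual Huihui-MoE implementation.

```python
# Minimal sketch of an MoE layer of the kind the README describes:
# a router scores each token, and only the top-k of 8 expert MLPs run for it.
# All names and sizes here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router assigns each token a score per expert.
        self.router = nn.Linear(d_model, n_experts)
        # 8 expert MLPs, each shaped like the dense MLP block it replaces.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = self.router(x)                         # (B, S, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)            # normalize the k gate weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out
```

Because only a few experts run per token, most of the total parameter count stays idle on any given forward pass, which is what the "23B-A4B" naming suggests: roughly 23B total parameters with about 4B active per token.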