---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
base_model:
- huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated
- huihui-ai/QwQ-32B-Preview-abliterated
- huihui-ai/Sky-T1-32B-Preview-abliterated
tags:
- chat
- abliterated
- uncensored
- Fusion
library_name: transformers
---

# huihui-ai/DeepSeekR1-QwQ-SkyT1-32B-Fusion-715

## Overview

`DeepSeekR1-QwQ-SkyT1-32B-Fusion-715` is a mixed model that combines the strengths of three powerful Qwen-based models: [huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated), [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated) and [huihui-ai/Sky-T1-32B-Preview-abliterated](https://huggingface.co/huihui-ai/Sky-T1-32B-Preview-abliterated).

**Although it is a simple mix, the model is usable, and no gibberish has appeared.**

This is an experiment. I test the [80:10:10](https://huggingface.co/huihui-ai/DeepSeekR1-QwQ-SkyT1-32B-Fusion-811), [70:15:15](https://huggingface.co/huihui-ai/DeepSeekR1-QwQ-SkyT1-32B-Fusion-715) and [60:20:20](https://huggingface.co/huihui-ai/DeepSeekR1-QwQ-SkyT1-32B-Fusion-622) ratios separately to see how much impact each mixing ratio has on the model.

## Model Details

- **Base Models:**
  - [huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated) (70%)
  - [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated) (15%)
  - [huihui-ai/Sky-T1-32B-Preview-abliterated](https://huggingface.co/huihui-ai/Sky-T1-32B-Preview-abliterated) (15%)
- **Model Size:** 32B parameters
- **Architecture:** Qwen 2.5

## Use with ollama

You can use [huihui_ai/deepseekr1-qwq-skyt1-fusion](https://ollama.com/huihui_ai/deepseekr1-qwq-skyt1-fusion) directly:

```
ollama run huihui_ai/deepseekr1-qwq-skyt1-fusion:715
```
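
## How the fusion ratio works

The 70:15:15 mix above can be sketched as a weighted average of the three models' parameters. The card does not document the exact merge procedure, so the snippet below is only an illustration of the ratio idea, not the author's actual pipeline; `merge_state_dicts` is a hypothetical helper:

```python
import torch

def merge_state_dicts(state_dicts, weights):
    """Hypothetical sketch: weighted average of matching tensors
    across several model state dicts (weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-6
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float()
                          for w, sd in zip(weights, state_dicts))
    return merged

# Toy example with tiny tensors standing in for the three 32B models,
# mixed at the 70:15:15 ratio used by this variant.
deepseek = {"w": torch.ones(2)}
qwq      = {"w": torch.zeros(2)}
skyt1    = {"w": torch.full((2,), 2.0)}
fused = merge_state_dicts([deepseek, qwq, skyt1], [0.70, 0.15, 0.15])
# 0.70 * 1.0 + 0.15 * 0.0 + 0.15 * 2.0 = 1.0 for each element
```

The 811 and 622 variants linked above only change the `weights` list (e.g. `[0.80, 0.10, 0.10]`), keeping everything else identical.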