
Thinking / Reasoning Models - Regular and MOEs.
QwQ, DeepSeek, EXAONE, DeepHermes, and other "thinking/reasoning" AIs / LLMs in regular, MOE (Mixture of Experts), and hybrid model formats.
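Most of the models below ship as GGUF quants. As a minimal sketch of how one of them might be run locally (assuming llama-cpp-python and huggingface_hub are installed; the repo id and quant filename here are illustrative placeholders, so check the individual model card for the real file names, context sizes, and recommended settings):

```python
# Hypothetical example: download one quant from a GGUF repo in this collection
# and run it with llama-cpp-python. Repo id and filename are placeholders only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "DavidAU/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-gguf"   # example repo from this list
filename = "DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-Q4_K_M.gguf"  # hypothetical quant file name

model_path = hf_hub_download(repo_id=repo_id, filename=filename)
llm = Llama(model_path=model_path, n_ctx=8192)  # set n_ctx per the model card

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Think step by step: why is the sky blue?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```

Reasoning/thinking models in this collection typically emit their chain of thought (often in <think>...</think> style blocks) before the final answer; see each model card for the exact prompt template and settings.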
DavidAU/Qwen3-30B-A6B-16-Extreme
Text Generation • 31B • Updated • 822 • 55
DavidAU/Reka-Flash-3-21B-Reasoning-Uncensored-MAX-NEO-Imatrix-GGUF
Text Generation • 21B • Updated • 1.73k • 51
DavidAU/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-gguf
Text Generation • 17B • Updated • 513 • 23
DavidAU/Qwen3-128k-30B-A3B-NEO-MAX-Imatrix-gguf
Text Generation • 31B • Updated • 12.8k • 23
DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-GGUF
Text Generation • 25B • Updated • 1.53k • 17
Note: MOE - Mixture of Experts version combining four 8B experts. Expect deeper thinking/reasoning and more complex prose than a standard 8B model.
DavidAU/Mistral-Grand-R1-Dolphin-3.0-Deep-Reasoning-Brainstorm-45B-GGUF
Text Generation • 45B • Updated • 218 • 11
DavidAU/DeepSeek-V2-Grand-Horror-SMB-R1-Distill-Llama-3.1-Uncensored-16.5B-GGUF
Text Generation • 17B • Updated • 491 • 12
DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-gguf
Text Generation • 14B • Updated • 736 • 10
DavidAU/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF
Text Generation • 8B • Updated • 1.04k • 15
DavidAU/DeepSeek-BlackRoot-R1-Distill-Llama-3.1-8B-GGUF
Text Generation • 8B • Updated • 162 • 9
DavidAU/DeepSeek-Grand-Horror-SMB-R1-Distill-Llama-3.1-16B-GGUF
Text Generation • 16B • Updated • 230 • 11
DavidAU/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-gguf
Text Generation • 18B • Updated • 138 • 8
Note: MOE - Mixture of Experts version combining eight 3B experts. Expect deeper thinking/reasoning and more complex prose than a standard 3B model.
DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-gguf
35B • Updated • 54 • 5
DavidAU/Llama-3.1-DeepSeek-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF
Text Generation • 8B • Updated • 413 • 6
DavidAU/Qwen2.5-MOE-6x1.5B-DeepSeek-Reasoning-e32-8.71B-gguf
Text Generation • 9B • Updated • 30 • 5
Note: MOE - Mixture of Experts version combining six 1.5B experts. Expect deeper thinking/reasoning and more complex prose than a standard 1.5B model.
DavidAU/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-gguf
Text Generation • 4B • Updated • 1.15k • 6
Note: MOE - Mixture of Experts version combining two 1.5B experts. Expect deeper thinking/reasoning and more complex prose than a standard 1.5B model.
DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-gguf
Text Generation • 19B • Updated • 224 • 4
Note: MOE - Mixture of Experts version combining two 7B experts. Expect deeper thinking/reasoning and more complex prose than a standard 7B model.
DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Mad-Scientist-24B-GGUF
Text Generation • 25B • Updated • 40 • 3
Note: MOE - Mixture of Experts version combining four 8B experts. Expect deeper thinking/reasoning and more complex prose than a standard 8B model.
DavidAU/DeepHermes-3-Llama-3-8B-Preview-16.5B-Brainstorm-gguf
Text Generation • 17B • Updated • 47 • 3
DavidAU/DeepSeek-R1-Distill-Qwen-25.5B-Brainstorm-gguf
Text Generation • 26B • Updated • 145 • 3
DavidAU/Deep-Reasoning-Llama-3.2-10pack-f16-gguf
Text Generation • 3B • Updated • 65 • 1
Note: Links to all 10 models in GGUF (regular and Imatrix) format are also on this page.
DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-gguf
Text Generation • 14B • Updated • 23 • 1DavidAU/Deep-Reasoning-Llama-3.2-Hermes-3-3B
Text Generation • 3B • Updated • 46 • 1DavidAU/Deep-Reasoning-Llama-3.2-JametMini-3B-MK.III
Text Generation • 3B • Updated • 4 • 1DavidAU/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B
Text Generation • 3B • Updated • 8 • 2DavidAU/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B
Text Generation • 3B • Updated • 832 • 1DavidAU/Llama3.2-DeepHermes-3-3B-Preview-Reasoning-MAX-NEO-Imatrix-GGUF
Text Generation • 3B • Updated • 306 • 3DavidAU/Deep-Reasoning-Llama-3.2-Overthinker-3B
Text Generation • 3B • Updated • 7 • 1DavidAU/Mistral-Grand-R1-Dolphin-3.0-Deep-Reasoning-Brainstorm-45B
Text Generation • 45B • Updated • 6 • 2DavidAU/Deep-Reasoning-Llama-3.2-COT-3B
Text Generation • 3B • Updated • 6DavidAU/Deep-Reasoning-Llama-3.2-Dolphin3.0-3B
Text Generation • 3B • Updated • 7DavidAU/Deep-Reasoning-Llama-3.2-Enigma-3B
Text Generation • 3B • Updated • 7DavidAU/Deep-Reasoning-Llama-3.2-ShiningValiant2-3B
Text Generation • 3B • Updated • 7DavidAU/Deep-Reasoning-Llama-3.2-BlackSheep-3B
Text Generation • 3B • Updated • 6 • 1DavidAU/Llama3.2-DeepHermes-3-3B-Preview-Reasoning-MAX-HORROR-Imatrix-GGUF
Text Generation • 3B • Updated • 621 • 1DavidAU/EXAONE-Deep-2.4B-Reasoning-MAX-NEO-Imatrix-GGUF
Text Generation • 3B • Updated • 65 • 3DavidAU/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-GGUF
Text Generation • 8B • Updated • 159 • 1DavidAU/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-Horror-Imatrix-MAX-8B-GGUF
Text Generation • 8B • Updated • 236 • 3DavidAU/L3.1-Evil-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-GGUF
Text Generation • 8B • Updated • 213
DavidAU/L3.1-MOE-6X8B-Dark-Reasoning-Dantes-Peak-Hermes-R1-Uncensored-36B
Text Generation • 36B • Updated • 980
Note: MOE - Mixture of Experts version combining six 8B experts. Expect deeper thinking/reasoning and more complex prose than a standard 8B model. Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/L3.1-MOE-4X8B-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-25B-GGUF
Text Generation • 25B • Updated • 330
Note: MOE - Mixture of Experts version combining four 8B experts. Expect deeper thinking/reasoning and more complex prose than a standard 8B model.
mradermacher/L3.1-MOE-4X8B-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-e32-25B-i1-GGUF
25B • Updated • 377 • 1
Note: MOE - Mixture of Experts version combining four 8B experts. Expect deeper thinking/reasoning and more complex prose than a standard 8B model. Imatrix GGUF Quant version of my model by Team "mradermacher".
DavidAU/L3.1-MOE-4X8B-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-e32-25B-GGUF
Text Generation • 25B • Updated • 66
Note: MOE - Mixture of Experts version combining four 8B experts. Expect deeper thinking/reasoning and more complex prose than a standard 8B model.
mradermacher/L3.1-MOE-4X8B-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-25B-i1-GGUF
25B • Updated • 361 • 1
Note: MOE - Mixture of Experts version combining four 8B experts. Expect deeper thinking/reasoning and more complex prose than a standard 8B model. Imatrix GGUF Quant version of my model by Team "mradermacher".
mradermacher/L3.1-Evil-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-i1-GGUF
8B • Updated • 566 • 2
Note: Imatrix GGUF Quant version of my model by Team "mradermacher".
mradermacher/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-i1-GGUF
8B • Updated • 303 • 1
Note: Imatrix GGUF Quant version of my model by Team "mradermacher".
DavidAU/L3.1-Dark-Reasoning-Halu-Blackroot-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 6 • 1
DavidAU/L3.1-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 13 • 3
Note: Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/L3.1-Dark-Reasoning-Jamet-8B-MK.I-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 6 • 1
Note: Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/L3.1-Dark-Reasoning-Anjir-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 9 • 2
Note: Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/L3.1-Dark-Reasoning-Celeste-V1.2-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 4 • 1
Note: Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them
Text Generation • Updated • 9
DavidAU/L3.1-MOE-6X8B-Dark-Reasoning-Dantes-Peak-HORROR-R1-Uncensored-36B-GGUF
Text Generation • 36B • Updated • 739 • 3
DavidAU/Llama3.1-MOE-4X8B-Gated-IQ-Multi-Tier-Deep-Reasoning-32B-GGUF
Text Generation • 25B • Updated • 358 • 7
DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-GGUF
Text Generation • 8B • Updated • 271 • 1
DavidAU/Llama3.1-MOE-4X8B-Gated-IQ-Multi-Tier-COGITO-Deep-Reasoning-32B-GGUF
Text Generation • 25B • Updated • 342 • 3
DavidAU/Qwen3-0.6B-NEO-Imatrix-Max-GGUF
Text Generation • 0.8B • Updated • 148
DavidAU/Qwen3-0.6B-HORROR-Imatrix-Max-GGUF
Text Generation • 0.8B • Updated • 72
DavidAU/Qwen3-1.7B-HORROR-Imatrix-Max-GGUF
Text Generation • 2B • Updated • 95 • 1
DavidAU/Qwen3-1.7B-NEO-Imatrix-Max-GGUF
Text Generation • 2B • Updated • 170 • 1
DavidAU/Qwen3-4B-HORROR-Imatrix-Max-GGUF
Text Generation • 4B • Updated • 64
DavidAU/Qwen3-4B-NEO-Imatrix-Max-GGUF
Text Generation • 4B • Updated • 147 • 5
DavidAU/Qwen3-8B-HORROR-Imatrix-Max-GGUF
Text Generation • 8B • Updated • 50
DavidAU/Qwen3-8B-NEO-Imatrix-Max-GGUF
Text Generation • 8B • Updated • 42 • 1
DavidAU/Qwen3-4B-Q8_0-64k-128k-256k-context-GGUF
Text Generation • 4B • Updated • 169 • 3
DavidAU/Qwen3-14B-HORROR-Imatrix-Max-GGUF
Text Generation • 15B • Updated • 35 • 3
DavidAU/Qwen3-14B-NEO-Imatrix-Max-GGUF
Text Generation • 15B • Updated • 50
DavidAU/Qwen3-8B-Q8_0-64k-128k-256k-context-GGUF
Text Generation • 8B • Updated • 44
DavidAU/Qwen3-4B-Mishima-Imatrix-GGUF
Text Generation • 4B • Updated • 7 • 3
DavidAU/Qwen3-32B-128k-HORROR-Imatrix-Max-GGUF
Text Generation • 33B • Updated • 53 • 2
DavidAU/Qwen3-32B-128k-NEO-Imatrix-Max-GGUF
Text Generation • 33B • Updated • 45 • 2
DavidAU/Qwen3-30B-A4.5B-12-Cooks
Text Generation • 31B • Updated • 6 • 5
DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context
Text Generation • 31B • Updated • 13 • 8
DavidAU/Qwen3-8B-256k-Context-8X-Grand
Text Generation • 8B • Updated • 20
DavidAU/Qwen3-8B-192k-Context-6X-Larger
Text Generation • 8B • Updated • 13
DavidAU/Qwen3-8B-128k-Context-4X-Large
Text Generation • 8B • Updated • 13
DavidAU/Qwen3-8B-96k-Context-3X-Medium-Plus
Text Generation • 8B • Updated • 10
DavidAU/Qwen3-8B-64k-Context-2X-Medium
Text Generation • 8B • Updated • 11 • 1
DavidAU/Qwen3-8B-320k-Context-10X-Massive
Text Generation • 8B • Updated • 43
DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored
Text Generation • 8B • Updated • 1.37k • 3
DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-NEO-Max-GGUF
Text Generation • 8B • Updated • 568 • 6
DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-HORROR-Max-GGUF
Text Generation • 8B • Updated • 69 • 6
DavidAU/Qwen3-8B-192k-Josiefied-Uncensored-NEO-Max-GGUF
Text Generation • 8B • Updated • 1.98k • 22
DavidAU/Qwen3-8B-192k-Josiefied-Uncensored-HORROR-Max-GGUF
Text Generation • 8B • Updated • 36 • 2
DavidAU/Qwen3-30B-A1.5B-64K-High-Speed-NEO-Imatrix-MAX-gguf
Text Generation • 31B • Updated • 838 • 13
DavidAU/Llama-3.2-8X3B-GATED-MOE-Reasoning-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF
Text Generation • 18B • Updated • 2.65k • 8
DavidAU/Llama-3.2-8X3B-GATED-MOE-NEO-Reasoning-Dark-Champion-uncensored-18.4B-IMAT-GGUF
Text Generation • 18B • Updated • 1.51k • 6
DavidAU/Llama-3.2-8X3B-GATED-MOE-Horror-Reasoning-Dark-Champion-uncensored-18.4B-IMAT-GGUF
Text Generation • 18B • Updated • 1.13k • 2
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
Updated • 128
Note: Document detailing all parameters, settings, samplers and advanced samplers needed to run not only my models, but all models (and quants) online - regardless of the repo - to their maximum potential. Includes a quick start, detailed notes, AI / LLM apps, and other critical information and references. A must read if you are using any AI/LLM right now.
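As a rough illustration of the kind of knobs that document covers (the values below are generic placeholders, not the document's recommended settings), this is how the common samplers map onto a llama-cpp-python call:

```python
# Generic sampler/parameter sketch for llama-cpp-python; values are placeholders --
# use the settings recommended in the document / individual model cards.
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_ctx=4096)  # path to any local GGUF quant

out = llm.create_completion(
    prompt="Write a short scene set in an abandoned lighthouse.",
    max_tokens=400,
    temperature=0.8,     # creativity vs. determinism
    top_k=40,            # keep only the 40 most likely tokens
    top_p=0.95,          # nucleus sampling cutoff
    min_p=0.05,          # drop tokens far less likely than the current top token
    repeat_penalty=1.1,  # discourage loops / repetition
)
print(out["choices"][0]["text"])
```

Backends such as KoboldCpp, LM Studio and Text Generation Web UI expose the same samplers under similar names in their own settings panels.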
DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE
Text Generation • Updated • 57
Note: SOFTWARE patch (by me) for SillyTavern (a front end that connects to multiple AI apps / APIs such as KoboldCpp, LM Studio, Text Generation Web UI and others) to control and improve the output generation of ANY AI model. Also designed to control/wrangle some of my more "creative" models and make them perform perfectly with little to no parameter/sampler adjustment.
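For context, SillyTavern (and therefore this patch) talks to local backends over their APIs; many of them, including KoboldCpp and LM Studio, also expose an OpenAI-compatible endpoint, so a bare-bones request looks roughly like the sketch below (base URL, port and model name are placeholders for whatever your backend reports):

```python
# Rough sketch of calling a local OpenAI-compatible backend (KoboldCpp, LM Studio,
# Text Generation Web UI, etc.). URL and model name are placeholders.
import requests

BASE_URL = "http://127.0.0.1:5001/v1"  # example: KoboldCpp's default local port; LM Studio defaults to 1234

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "local-model",  # many local backends ignore or loosely match this field
        "messages": [{"role": "user", "content": "Give me a two-sentence horror hook."}],
        "max_tokens": 200,
        "temperature": 0.8,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```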
DavidAU/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-GGUF
Text Generation • 22B • Updated • 480 • 9
DavidAU/Qwen3-The-Xiaolong-Omega-Directive-22B-uncensored-abliterated-GGUF
Text Generation • 22B • Updated • 339 • 3
DavidAU/Qwen3-The-Xiaolong-Josiefied-Omega-Directive-22B-uncensored-abliterated-GGUF
Text Generation • 22B • Updated • 852 • 10
DavidAU/Magistral-Small-2506-Reasoning-24B-NEO-MAX-Imatrix-GGUF
Text Generation • 24B • Updated • 817 • 3
DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF
Text Generation • 33B • Updated • 979 • 7
DavidAU/Qwen3-18B-A3B-Stranger-Thoughts-GGUF
Text Generation • 17B • Updated • 729 • 1
DavidAU/Qwen3-18B-A3B-Stranger-Thoughts-Abliterated-Uncensored-GGUF
Text Generation • 17B • Updated • 3.68k • 9
DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-Abliterated-Uncensored
Text Generation • 33B • Updated • 9 • 1
DavidAU/Mistral-Small-3.2-46B-The-Brilliant-Raconteur-II-Instruct-2506
Text Generation • 46B • Updated • 24 • 5
DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-128k
Text Generation • 33B • Updated • 9
DavidAU/Mistral-Small-3.2-46B-The-Brilliant-Raconteur-II-Instruct-2506-GGUF
Text Generation • 45B • Updated • 1.3k • 2
DavidAU/Mistral-Small-3.2-46B-The-Brilliant-Raconteur-Instruct-2506-GGUF
Text Generation • 45B • Updated • 281
DavidAU/Qwen2.5-OpenCodeReasoning-Nemotron-1.1-7B-NEO-imatix-gguf
Text Generation • 8B • Updated • 1.12k
DavidAU/Mistral-2x24B-MOE-Power-CODER-Magistral-Devstral-Reasoning-Ultimate-NEO-MAX-44B-gguf
Text Generation • 44B • Updated • 5.78k
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 13.5k • 11
DavidAU/Qwen3-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 16
DavidAU/Qwen3-Polaris-Preview-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 16 • 1
DavidAU/Qwen3-Blitzar-Coder-F1-6B-Brainstorm20x
Text Generation • 6B • Updated • 19 • 1
DavidAU/Qwen3-Instruct-F16-6B-Brainstorm20x
Text Generation • 6B • Updated • 15
DavidAU/Qwen3-Instruct-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 38 • 1
DavidAU/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 77 • 1
DavidAU/Qwen3-Instruct-F16-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 22
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 16
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32
Text Generation • 6B • Updated • 12 • 1
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-128k-ctx
Text Generation • 6B • Updated • 15
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-192k-ctx
Text Generation • 6B • Updated • 14
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x
Text Generation • 12B • Updated • 40
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x-128k-ctx
Text Generation • 12B • Updated • 116
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B
Text Generation • 0.8B • Updated • 128
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 3.25k • 1
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x
Text Generation • 21B • Updated • 18 • 2
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32
Text Generation • 2B • Updated • 14
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B
Text Generation • 2B • Updated • 20
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-mix2
Text Generation • 2B • Updated • 13
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2
Text Generation • 2B • Updated • 13
DavidAU/Qwen3-Shining-Lucy-CODER-3.4B-Brainstorm20x-e32
Text Generation • 3B • Updated • 8
DavidAU/Qwen3-Shining-Valiant-Instruct-CODER-Reasoning-2.7B
Text Generation • 3B • Updated • 17
DavidAU/Qwen3-Shining-Valiant-Instruct-Fast-CODER-Reasoning-2.4B
Text Generation • 2B • Updated • 33 • 1
DavidAU/Mistral-Magistral-Devstral-Instruct-FUSED-CODER-Reasoning-36B
Text Generation • 36B • Updated • 11
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x-128k-ctx
Text Generation • 21B • Updated • 12
DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 28 • 3
DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF
Text Generation • 21B • Updated • 2.71k • 2
DavidAU/Openai_gpt-oss-20b-NEO-GGUF
Text Generation • 21B • Updated • 5.05k • 8
DavidAU/Openai_gpt-oss-120b-NEO-Imatrix-GGUF
Text Generation • 117B • Updated • 5.19k
DavidAU/OpenAi-GPT-oss-20b-abliterated-uncensored-NEO-Imatrix-gguf
Text Generation • 21B • Updated • 3.11k • 11