
Coder and Programming Models - MOE, Regular, Imatrix.
Models (0.8B to 87B) in regular, "reasoning", "Brainstorm", and MOE (1x to 8x / 128 experts) configurations, expanded to generate better and stronger code, faster.
Text Generation • 39B • Note: Repo with multiple (41) coding models in 1 or 2 quants each; many of these models now have full repos with full quants, listed below.
LISTING ORDER OF THIS COLLECTION: MOEs, in terms of raw power/size; Brainstorm (an adapter by DavidAU); standard models, in terms of raw power/size.
QUANTS: For complex coding / long coding projects, I strongly suggest you use the highest quant(s) you can, in both Imatrix and regular form, with Imatrix preferred. Likewise, prefer higher-parameter-count models and/or MOEs.
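As an illustrative sketch of the quant advice above, here is a small helper that picks the "best" GGUF file from a list of filenames: highest quant level first, Imatrix build as a tie-breaker. The filenames follow common llama.cpp naming conventions (Q4_K_M, Q6_K, Q8_0, ...), but the ranking order and the "imat" substring check are assumptions for illustration, not part of any repo here.

```python
# Sketch only: the quality ordering below is an assumption based on the
# advice "use the highest quant you can, with Imatrix preferred".
QUANT_ORDER = ["IQ2_XS", "Q2_K", "Q3_K_M", "Q4_K_S", "Q4_K_M",
               "Q5_K_M", "Q6_K", "Q8_0"]  # lowest -> highest (assumed)

def pick_quant(filenames: list[str]) -> str:
    """Return the filename with the highest quant level; break ties
    in favour of Imatrix builds (assumed to contain 'imat' in the name)."""
    def score(name: str) -> tuple[int, int]:
        level = -1
        for i, q in enumerate(QUANT_ORDER):
            if q.lower() in name.lower():
                level = i
        is_imatrix = int("imat" in name.lower())
        return (level, is_imatrix)
    return max(filenames, key=score)

files = [
    "model-Q4_K_M.gguf",
    "model-Q6_K.gguf",
    "model-Q6_K-imatrix.gguf",
    "model-Q8_0.gguf",
]
print(pick_quant(files))  # -> model-Q8_0.gguf
```

Note that the quant level dominates the tie-break: a Q5 non-imatrix still wins over a Q4 imatrix under this scoring; imatrix only decides between files at the same level.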
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4-256k-ctx
Text Generation • 53B • Note: 128-expert MOE model with 256k context. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4-128k
Text Generation • 53B • Note: 128-expert MOE model with 128k context. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen3-53B-A3B-TOTAL-RECALL-MASTER-CODER-v1.4
Text Generation • 53B • Note: 128-expert MOE model. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Uses the Brainstorm adapter (40x) by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-v1.2
Text Generation • 87B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-2X32B-CoderInstruct-OlympicCoder-87B-v1.1
Text Generation • 87B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Mistral-2x24B-MOE-Power-CODER-Magistral-Devstral-Reasoning-Ultimate-NEO-MAX-44B-gguf
Text Generation • 44B • Note: Devstral (coder) with reasoning, which can be turned on or off. 128k context.
DavidAU/Mistral-2x24B-MOE-Power-Magistral-Devstral-Reasoning-Ultimate-44B
Text Generation • 44B • Note: Devstral (coder) with reasoning, which can be turned on or off. 128k context.
DavidAU/Mistral-2x24B-MOE-Power-Devstral-Magistral-Reasoning-Ultimate-44B
Text Generation • 44B • Note: Devstral (coder) with reasoning, which can be turned on or off. 128k context.
DavidAU/Mistral-2x22B-MOE-Power-Codestral-Ultimate-39B
Text Generation • 39B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B-128k-ctx
Text Generation • 53B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. 128k context.
DavidAU/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B
Text Generation • 53B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-6x7B-Six-Pack-Coder-Instruct-42B-128k-ctx
Text Generation • 42B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. 128k context.
DavidAU/Qwen2.5-6x7B-Six-Pack-Coder-Instruct-42B
Text Generation • 42B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-4x7B-Quad-Coder-Instruct-30B-128k-ctx
Text Generation • 30B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. 128k context.
DavidAU/Qwen2.5-4x7B-Quad-Coder-Instruct-30B
Text Generation • 30B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-25B-v1-128k-ctx
Text Generation • 25B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. 128k context.
DavidAU/Qwen2.5-3X7B-CoderInstruct-OlympicCoder-MS-Next-Coder-25B-v1
Text Generation • 25B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-2X7B-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-2X7B-Coder-CodeV-R1-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Note: Specialized two-model MOE with an additional shared expert. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-2X7B-Coder-VisCoder-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Note: Specialized two-model MOE with an additional shared expert. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-2X7B-Coder-Soar-qwen-Coder-Instruct-OlympicCoder-19B
Text Generation • 19B • Note: Specialized two-model MOE with an additional shared expert. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-V2-28B
Text Generation • 28B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-28B-gguf
Text Generation • 28B
DavidAU/Qwen2.5-2X11B-CODER-Dueling-Wolverines-28B
Text Generation • 28B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-Godzilla-Coder-51B-128k
Text Generation • 51B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k.
DavidAU/Qwen2.5-Godzilla-Coder-51B-gguf
Text Generation • 51B
DavidAU/Qwen2.5-Godzilla-Coder-51B
Text Generation • 51B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-Godzilla-Coder-V2-51B-128k
Text Generation • 51B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k.
DavidAU/Qwen2.5-Godzilla-Coder-V2-51B
Text Generation • 51B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Mistral-Devstral-2507-CODER-Brainstorm40x-44B
Text Generation • 44B • Note: Newest Devstral version, with even better coding abilities. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2507-CODER-Brainstorm20x-34B
Text Generation • 34B • Note: Newest Devstral version, with even better coding abilities. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2505-CODER-Brainstorm40x-44B
Text Generation • 44B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Mistral-Devstral-2505-CODER-Brainstorm20x-34B
Text Generation • 34B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-42B
Text Generation • 42B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-42B
Text Generation • 42B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-20B
Text Generation • 20B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-20B
Text Generation • 20B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-128k-ctx-12B
Text Generation • 12B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. Context at 128k. Uses the Brainstorm adapter by DavidAU to extend model function/performance.
DavidAU/Qwen2.5-Microsoft-NextCoder-Brainstorm20x-12B
Text Generation • 12B • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen3-Jan-Nano-128k-6B-Brainstorm20x
Text Generation • 6B • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. This is both a general-use and a coder/programming model.
DavidAU/Qwen3-Blitzar-Coder-F1-6B-Brainstorm20x
Text Generation • 6B • Note: Uses the Brainstorm adapter by DavidAU to extend model function/performance. Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-Wolverine-CODER-11B-128k-ctx
Text Generation • 11B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. 128k context.
DavidAU/Qwen2.5-Wolverine-CODER-11B-gguf
Text Generation • 11B
DavidAU/Qwen2.5-Wolverine-CODER-11B
Text Generation • 11B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-Wolverine-CODER-11B-V2-128k-ctx
Text Generation • 11B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. 128k context.
DavidAU/Qwen2.5-Wolverine-CODER-11B-V2
Text Generation • 11B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page.
DavidAU/Qwen2.5-OpenCodeReasoning-Nemotron-1.1-7B-NEO-imatix-gguf
Text Generation • 8B • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Note: Uses the NEO Imatrix dataset (by DavidAU) to augment model performance. 40k context. Good for draft code, simple code, or code blocks, including complex ones. The model also has full thinking/reasoning.
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B
Text Generation • 0.8B • Note: Links to quants (GGUF, GGUF Imatrix, and others) are on the model page. 40k context. Good for draft code, simple code, or code blocks, including complex ones. The model also has full thinking/reasoning.