Mistral-2x24B-MOE-Power-Magistral-Devstral-Reasoning-Ultimate-44B

This repo contains the full-precision source model in safetensors format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other quantized formats. The source model can also be used directly.
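For direct use of the source weights, a minimal sketch with Hugging Face transformers is shown below (the dtype, device placement, and generation settings are assumptions; adjust them for your hardware):

```python
# Minimal sketch: load the full-precision safetensors source with transformers.
# Repo id is taken from this model card; device_map/dtype choices are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DavidAU/Mistral-2x24B-MOE-Power-Magistral-Devstral-Reasoning-Ultimate-44B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # source tensors are BF16
    device_map="auto",           # spread the 44B model across available GPUs/CPU
)

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```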

TWO monster coders (Mistral's Magistral 24B AND Devstral 24B) in a MOE (Mixture of Experts) 2x24B configuration with full reasoning (can be turned on/off).

The two best Mistral coders at 24B, combined into one model that is stronger than the sum of its parts.

Both models code together, with Magistral "in charge" and drawing on Devstral's coding power.

Full reasoning/thinking which can be turned on or off.

If you want a version where Devstral is in charge (but still with reasoning on/off and Magistral included), see this repo:

https://huggingface.co/DavidAU/Mistral-2x24B-MOE-Power-Devstral-Magistral-Reasoning-Ultimate-44B

Info on each model is below, followed by info on the MOE model, settings, etc.

GGUF (enhanced, NEO Imatrix):

https://huggingface.co/DavidAU/Mistral-2x24B-MOE-Power-CODER-Magistral-Devstral-Reasoning-Ultimate-44B-gguf


Devstral Small 1.0


Devstral is an agentic LLM for software engineering tasks built under a collaboration between Mistral AI and All Hands AI 🙌. Devstral excels at using tools to explore codebases, editing multiple files, and powering software engineering agents. The model achieves remarkable performance on SWE-bench, which positions it as the #1 open source model on this benchmark.

It is finetuned from Mistral-Small-3.1, so it has a long context window of up to 128k tokens. As a coding agent, Devstral is text-only; the vision encoder was removed from Mistral-Small-3.1 before fine-tuning.

For enterprises requiring specialized capabilities (increased context, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community.

Learn more about Devstral in our blog post.

Key Features:

  • Agentic coding: Devstral is designed to excel at agentic coding tasks, making it a great choice for software engineering agents.
  • Lightweight: with its compact size of just 24 billion parameters, Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB RAM, making it an appropriate model for local deployment and on-device use.
  • Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
  • Context Window: A 128k context window.
  • Tokenizer: Utilizes a Tekken tokenizer with a 131k vocabulary size.

Benchmark Results

SWE-Bench

Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source SoTA by 6%.

| Model | Scaffold | SWE-Bench Verified (%) |
|---|---|---|
| Devstral | OpenHands Scaffold | 46.8 |
| GPT-4.1-mini | OpenAI Scaffold | 23.6 |
| Claude 3.5 Haiku | Anthropic Scaffold | 40.6 |
| SWE-smith-LM 32B | SWE-agent Scaffold | 40.2 |

For additional settings, usage information, benchmarks, etc., also see:

https://huggingface.co/mistralai/Devstral-Small-2505


Model Card for Magistral-Small-2506


Building upon Mistral Small 3.1 (2503) with added reasoning capabilities, via SFT from Magistral Medium traces and RL on top, it is a small, efficient reasoning model with 24B parameters.

Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.

Learn more about Magistral in our blog post.

The model was presented in the paper Magistral.

Key Features

  • Reasoning: Capable of long chains of reasoning traces before providing an answer.
  • Multilingual: Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
  • Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
  • Context Window: A 128k context window, but performance might degrade past 40k. Hence we recommend setting the maximum model length to 40k.

Benchmark Results

| Model | AIME24 pass@1 | AIME25 pass@1 | GPQA Diamond | Livecodebench (v5) |
|---|---|---|---|---|
| Magistral Medium | 73.59% | 64.95% | 70.83% | 59.36% |
| Magistral Small | 70.68% | 62.76% | 68.18% | 55.84% |

Sampling parameters

Please make sure to use the following (a usage sketch follows this list):

  • top_p: 0.95
  • temperature: 0.7
  • max_tokens: 40960
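As an illustration, here is a hedged sketch of passing these parameters through an OpenAI-compatible chat endpoint (the base URL, API key, and model name are placeholders for whatever local server you run, such as LM Studio or llama.cpp's server):

```python
# Sketch only: send the recommended sampling parameters to a local
# OpenAI-compatible server. base_url, api_key and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # whatever name your server exposes
    messages=[{"role": "user", "content": "Implement an LRU cache in Python."}],
    temperature=0.7,
    top_p=0.95,
    max_tokens=40960,     # leave room for long reasoning traces
)
print(response.choices[0].message.content)
```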

Basic Chat Template

We highly recommend including the default system prompt used during RL for the best results; you can edit and customise it if needed for your specific use case.

<s>[SYSTEM_PROMPT]system_prompt

A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown to format your response. Write both your thoughts and summary in the same language as the task posed by the user. NEVER use \boxed{} in your response.

Your thinking process must follow the template below:
<think>
Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
</think>

Here, provide a concise summary that reflects your reasoning and presents a clear final answer to the user. Don't mention that this is a summary.

Problem:

[/SYSTEM_PROMPT][INST]user_message[/INST]<think>
reasoning_traces
</think>
assistant_response</s>[INST]user_message[/INST]

system_prompt, user_message and assistant_response are placeholders.

We invite you to choose, depending on your use case and requirements, between keeping reasoning traces during multi-turn interactions or keeping only the final assistant response.
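As a rough sketch, the template (and the reasoning on/off behaviour) can be driven from Python with transformers, assuming the tokenizer in this repo ships a matching chat template; including the reasoning system prompt turns thinking on, while omitting it leaves thinking off:

```python
# Sketch: build a prompt with the reasoning system prompt via apply_chat_template.
# Assumes the tokenizer bundled with this repo carries a matching chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "DavidAU/Mistral-2x24B-MOE-Power-Magistral-Devstral-Reasoning-Ultimate-44B"
)

system_prompt = (
    "A user will ask you to solve a task. You should first draft your thinking "
    "process (inner monologue) until you have derived the final answer. ..."
)  # full text as shown above; omit this message entirely to turn reasoning off

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Implement binary search in Python."},
]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should resemble the [SYSTEM_PROMPT]/[INST] layout shown above
```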

For additional settings, usage information, benchmarks, etc., also see:

https://huggingface.co/mistralai/Magistral-Small-2506


Mistral-2x24B-MOE-Power-Magistral-Devstral-Reasoning-Ultimate-44B

SETTINGS


Max context is 128k (131,072 tokens). If reasoning is on, a minimum 8k context window is strongly suggested.

REASONING SYSTEM PROMPT (optional):

A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown and Latex to format your response. Write both your thoughts and summary in the same language as the task posed by the user.

Your thinking process must follow the template below:
<think>
Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
</think>

GENERAL:

All versions have a default of 2 experts activated.

The number of active experts can be adjusted in LM Studio and other AI apps.

Suggest 2-4 generations per prompt, especially if using 1 expert (all models).

The models will accept a "simple prompt" as well as very detailed instructions; however, for larger projects I suggest using Q6/Q8 quants or optimized quants.

Suggested Settings (see the sketch after this list):

  • Temp .5 to .7 (or lower)
  • top_k: 20, top_p: .8, min_p: .05
  • rep pen: 1.1 (can be lower)
  • Jinja Template (embedded) or CHATML template.
  • A system prompt is not required (tests were run with a blank system prompt).
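Here is a hedged configuration sketch of these settings with llama-cpp-python against a GGUF quant (the file name is a placeholder, and the context size follows the 8k-minimum suggestion above):

```python
# Sketch: run a GGUF quant of this MOE with the suggested sampler settings.
# The model_path below is a placeholder file name, not an actual quant name.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-2x24B-MOE-Magistral-Devstral-Q6_K.gguf",  # placeholder
    n_ctx=8192,        # min 8k suggested when reasoning is on
    n_gpu_layers=-1,   # offload all layers if they fit in VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a quicksort in Python with comments."}],
    temperature=0.6,       # 0.5-0.7 suggested
    top_k=20,
    top_p=0.8,
    min_p=0.05,
    repeat_penalty=1.1,
)
print(out["choices"][0]["message"]["content"])
```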

For additional settings, usage information, benchmarks, etc., also see:

https://huggingface.co/mistralai/Devstral-Small-2505

and/or

https://huggingface.co/mistralai/Magistral-Small-2506


For more information on other Qwen/Mistral coders and additional settings, see:


[ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]


Help, Adjustments, Samplers, Parameters and More


CHANGE THE NUMBER OF ACTIVE EXPERTS:

See this document:

https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
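As one example, llama-cpp-python can override GGUF metadata at load time via kv_overrides; the metadata key below is an assumption (it varies by architecture), so verify it against your GGUF's metadata and the document linked above:

```python
# Sketch: override the number of active experts when loading a GGUF.
# The "llama.expert_used_count" key is an assumption; check your GGUF's
# metadata for the actual key name used by this architecture.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-2x24B-MOE-Magistral-Devstral-Q6_K.gguf",  # placeholder
    kv_overrides={"llama.expert_used_count": 2},  # default is 2 experts
)
```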

Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;

Set the "Smoothing_factor" to 1.5

: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: In Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

  • Increase rep pen to 1.1-1.15 (you don't need to do this if you use "smoothing_factor")

  • If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This a "Class 1" model:

For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

That document also lists all parameters used for generation, plus advanced parameters and samplers to get the most out of this model.
