
Magpie-Qwen-CortexDual-0.6B

Magpie-Qwen-CortexDual-0.6B is a compact general-purpose model specialized for math, code, and structured reasoning. Built with the CortexDual thinking mode, it dynamically adapts to the complexity of a problem, automatically shifting into stepwise reasoning for intricate logic or math tasks. Fine-tuned from Qwen/Qwen3-0.6B, this 0.6B-parameter model is trained on 80% of the Magpie Pro 330k dataset together with a modular blend of datasets for general-purpose proficiency and domain versatility.

GGUF: https://huggingface.co/prithivMLmods/Magpie-Qwen-CortexDual-0.6B-GGUF
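For the GGUF build, a llama-cpp-python call along the following lines should work as a minimal sketch; the model path is a hypothetical placeholder for whichever quantized file you download from the GGUF repository, and the exact filename is not specified here.

from llama_cpp import Llama

# Hypothetical local path; substitute the quantized file downloaded from the GGUF repo
llm = Llama(model_path="Magpie-Qwen-CortexDual-0.6B.Q8_0.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the Pythagorean theorem in one sentence."}]
)
print(out["choices"][0]["message"]["content"])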


Key Features

  1. Adaptive Reasoning via CortexDual: Automatically switches into a deeper thinking mode for complex problems, simulating trace-style deduction for higher-order tasks in math and code (an explicit toggle is sketched under Demo Inference below).

  2. Efficient and Compact: At 0.6B parameters, the model is optimized for deployment in constrained environments while retaining high fidelity in logic, computation, and structural formatting.

  3. Magpie-Driven Data Synthesis: Trained on 80% of Magpie Pro 330k, a high-quality alignment and reasoning dataset, complemented with curated modular datasets for enhanced general-purpose capability.

  4. Mathematical Precision: Fine-tuned for arithmetic, algebra, calculus, and symbolic logic; well suited to STEM learning platforms, math solvers, and step-by-step tutoring.

  5. Lightweight Code Assistance: Understands and generates code in Python, JavaScript, and other common languages, with contextual accuracy and explanation support.

  6. Structured Output Generation: Specializes in Markdown, JSON, and table outputs, suitable for technical documentation, instruction generation, and structured reasoning (a short sketch follows the Quickstart below).

  7. Multilingual Competence: Supports more than 20 languages with reasoning and translation support, extending its reach for global educational and development use.


Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Magpie-Qwen-CortexDual-0.6B"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to check if a number is prime. Explain each step."

messages = [
    {"role": "system", "content": "You are an AI tutor skilled in both math and code."},
    {"role": "user", "content": prompt}
]

# Render the chat messages with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Tokenize and move the inputs to the model's device
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens, keeping only the newly generated ones
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
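As a quick illustration of the structured-output capability listed under Key Features, the sketch below reuses the model and tokenizer from the Quickstart; the system message and prompt are illustrative choices, not taken from the card.

json_messages = [
    {"role": "system", "content": "Respond only with valid JSON."},
    {"role": "user", "content": "Describe a beginner Python exercise as an object with keys 'title', 'task', and 'difficulty'."}
]

json_text = tokenizer.apply_chat_template(
    json_messages,
    tokenize=False,
    add_generation_prompt=True
)
json_inputs = tokenizer([json_text], return_tensors="pt").to(model.device)
json_ids = model.generate(**json_inputs, max_new_tokens=256)

# Decode only the tokens generated after the prompt
print(tokenizer.decode(json_ids[0][json_inputs.input_ids.shape[1]:], skip_special_tokens=True))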

Demo Inference

Non-thinking mode (direct, reactive, retrieval-based responses)

Thinking mode (reasoning, planning, deeper analysis)
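The card describes mode switching as automatic. If this fine-tune retains the base Qwen3-0.6B chat template, thinking can also be toggled explicitly with the template's enable_thinking flag; this is an assumption about the inherited template, not something stated above. Continuing from the Quickstart variables:

# Assumes the Qwen3 chat template's enable_thinking flag carried over to this fine-tune
thinking_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True    # trace-style, step-by-step reasoning
)

direct_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False   # direct, retrieval-style answers
)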


Intended Use

  • General-purpose problem solving in math, logic, and code
  • Interactive STEM tutoring and reasoning explanation
  • Compact assistant for technical documentation and structured data tasks
  • Multilingual applications with a focus on accurate technical reasoning
  • Efficient offline deployment on low-resource devices

Limitations

  • Lower creativity and open-domain generation due to reasoning-focused tuning
  • Limited context window size due to compact model size
  • May produce simplified logic paths in highly abstract domains
  • Trade-offs in diversity and expressiveness compared to larger instruction-tuned models

References

  1. Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing
  2. Qwen2.5 Technical Report
  3. YaRN: Efficient Context Window Extension of Large Language Models