
Magpie-Qwen-DiMind-1.7B

Magpie-Qwen-DiMind-1.7B is a compact yet capable model for mathematical reasoning, code generation, and structured output tasks, built on a dual-intelligence architecture (DiMind) that handles both quick-response prompts and deep, multi-step problems. At 1.7B parameters it balances performance and efficiency, trained on 80% of the Magpie Pro 330k dataset together with a modular blend of additional datasets for general-purpose and technical tasks.

GGUF: https://huggingface.co/prithivMLmods/Magpie-Qwen-DiMind-1.7B-GGUF
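
For fully local inference with the GGUF build, the llama-cpp-python bindings are one option. This is a minimal sketch rather than an official quickstart; the quant filename below is an assumption, so substitute whichever .gguf file you download from the repo above.

from llama_cpp import Llama

# Load a local GGUF quant (filename is an assumption; use your downloaded file)
llm = Llama(
    model_path="Magpie-Qwen-DiMind-1.7B.Q8_0.gguf",
    n_ctx=4096,  # context length for the session
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a step-by-step math tutor."},
        {"role": "user", "content": "Factor x^2 - 5x + 6. Show all steps."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])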


Key Features

  1. Dual-Intelligence Architecture (DiMind): Integrates rapid-response capabilities for straightforward queries with deep analytical pathways for complex tasks such as proofs, derivations, and recursive logic.

  2. Magpie-Tuned Reasoning Core: Fine-tuned on 80% of Magpie Pro 330k and curated modular datasets to enhance accuracy, clarity, and depth in math, code, and structured generation.

  3. Mathematical Depth: Performs strongly on algebra, geometry, calculus, and symbolic logic; well suited to tutoring, competitions, and academic support.

  4. Lightweight Coding Assistant: Understands and writes concise, readable code in Python, JavaScript, and other major languages, including step-by-step breakdowns and bug explanations.

  5. Structured Output Mastery: Generates content in structured formats such as JSON, Markdown, and LaTeX; ideal for documentation, data templates, and educational materials (see the structured-output sketch after the quickstart).

  6. Multilingual Reasoning: Handles technical reasoning and translation in over 20 languages, broadening accessibility for global education and multilingual workflows.

  7. Efficient for Mid-Resource Environments: The 1.7B parameter count enables strong reasoning without high-end infrastructure, making the model suitable for local deployment and edge inference (see the quantized-loading sketch after this list).
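
The quantized-loading sketch below shows one way to exploit feature 7: loading the checkpoint in 4-bit via bitsandbytes to cut the memory footprint. This is an illustration assuming a CUDA GPU with the bitsandbytes package installed, not part of the official setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize weights to 4-bit at load time to reduce memory use
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "prithivMLmods/Magpie-Qwen-DiMind-1.7B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Magpie-Qwen-DiMind-1.7B")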


Quickstart with Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Magpie-Qwen-DiMind-1.7B"

# Load the checkpoint; "auto" lets transformers pick the dtype and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the equation: 2(x - 4) + 3 = 11. Show all steps."

messages = [
    {"role": "system", "content": "You are a step-by-step math tutor."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into the model's expected prompt format
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated reply is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
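
To exercise the structured-output feature from the list above, the same model and tokenizer can be reused with a system prompt that pins the format. A minimal sketch; the JSON schema here is illustrative, not one the model card prescribes.

import json

messages = [
    {"role": "system", "content": "Reply with JSON only, using keys 'equation', 'steps' (a list of strings), and 'answer'."},
    {"role": "user", "content": "Solve 2(x - 4) + 3 = 11."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
# Drop the prompt tokens, then parse; json.loads raises ValueError if the model strays from JSON
reply = tokenizer.batch_decode(output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0]
result = json.loads(reply)
print(result["answer"])

For reference, the algebra the quickstart prompt targets: 2(x - 4) + 3 = 11 expands to 2x - 5 = 11, so 2x = 16 and x = 8.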

Intended Use

  • Advanced math and symbolic problem-solving
  • Code generation, review, and explanation
  • Technical and structured content generation (JSON, Markdown, LaTeX)
  • Educational tutoring and reasoning in multiple languages
  • Deployment in academic, professional, and resource-aware environments

Limitations

  • May produce shallow answers on open-ended creative tasks
  • Smaller context window than 7B+ models; best suited for focused reasoning
  • Reasoning fidelity may degrade on edge-case or adversarial queries
  • Multilingual fluency is geared toward technical use cases rather than general conversation

References

  1. Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing
  2. Qwen2.5 Technical Report
  3. YaRN: Efficient Context Window Extension of Large Language Models

Base model: Qwen/Qwen3-1.7B