---
license: apache-2.0
datasets:
- unsloth/OpenMathReasoning-mini
- mlabonne/FineTome-100k
language:
- en
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
tags:
- moe
- math
- code
- text-generation-inference
- trl
---

![Draconis.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/CpdC-5a9DZO7NMY6DcW9M.png)

# Draconis-Qwen3\_Math-4B-Preview

> **Draconis-Qwen3\_Math-4B-Preview** is fine-tuned from the **Qwen3-4B** base model and optimized for **mathematical reasoning**, **logical problem solving**, and **structured content generation**. This preview model focuses on precision, step-by-step reasoning, and efficient inference, making it well suited to educational and technical applications where reliability and compact performance are essential.

> [!note]
> GGUF [Q4_K_M]: https://huggingface.co/prithivMLmods/Draconis-Qwen3_Math-4B-Preview-Q4_K_M-GGUF

> [!note]
> GGUF [Q5_K_M]: https://huggingface.co/prithivMLmods/Draconis-Qwen3_Math-4B-Preview-Q5_K_M-GGUF

A sketch of running the GGUF quants appears at the end of this card.

## Key Features

1. **Mathematical and Logical Reasoning**
   Fine-tuned to solve symbolic logic, arithmetic, and multi-step mathematical problems, making it well suited to STEM learning, competitions, and educational use.

2. **Compact Code Understanding**
   Writes and interprets code in Python, JavaScript, and other languages, suitable for lightweight coding tasks and algorithmic explanations.

3. **Factual Precision**
   Trained on high-quality, curated data with reasoning benchmarks to reduce hallucinations and improve correctness in technical outputs.

4. **Instruction-Tuned**
   Strong adherence to instructions, ideal for structured queries, step-by-step problem solving, and producing formatted outputs (Markdown, JSON, tables).

5. **Multilingual Support**
   Understands and responds in over 20 languages, useful for multilingual education and technical translation.

6. **Efficient Performance**
   Based on the 4B-parameter variant of Qwen3, optimized for resource-constrained environments without compromising core reasoning capability.

## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Draconis-Qwen3_Math-4B-Preview"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the equation: 3x + 7 = 22. Show all steps."
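# Illustrative note (not from the original card): the same setup also covers the
# structured-output tasks listed under Key Features, e.g. by swapping in a prompt such as:
# prompt = "Factorize 360 into primes and return the result as a JSON object mapping each prime to its exponent."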
messages = [ {"role": "system", "content": "You are a step-by-step math tutor."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## Intended Use * Solving math and logic problems * Code assistance and basic debugging * Education-focused applications (STEM tutoring) * Structured content generation (e.g., JSON, Markdown) * Multilingual reasoning and translations * Lightweight deployment in reasoning tasks ## Limitations * Limited creativity in open-ended or fictional content * May struggle with ambiguous or multi-intent prompts * Smaller context window compared to 14B+ variants * Still subject to factual errors in edge cases or adversarial queries ## References 1. [AIMO-2 Winning Solution: Building State-of-the-Art Mathematical Reasoning Models] : https://arxiv.org/pdf/2504.16891 2. [YaRN: Efficient Context Window Extension of Large Language Models] : https://arxiv.org/pdf/2309.00071