---
license: apache-2.0
datasets:
- simplescaling/s1K-1.1
- nvidia/OpenMathReasoning
- mlabonne/FineTome-100k
language:
- en
library_name: transformers
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
tags:
- text-generation-inference
- math
- sft
- code
---

![zdfbdccf.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/4XCMQEsE0mv2s5rx-YdIK.png)

# Crux-Qwen3\_OpenThinking-4B

> **Crux-Qwen3\_OpenThinking-4B** is fine-tuned from the **Qwen3-4B** architecture and optimized for advanced **open thinking**, **mathematical reasoning**, and **logical problem solving**. The model is trained on the reasoning traces of **s1K-1.1**, which comprise 1,000 entries from the **Gemini thinking trajectory**, combined with fine-tuning on 100k tokens of **open math reasoning** data. This makes it well suited for nuanced reasoning, educational tasks, and complex problem solving that requires clear thought processes.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Crux-Qwen3_OpenThinking-4B-GGUF](https://huggingface.co/prithivMLmods/Crux-Qwen3_OpenThinking-4B-GGUF)

## Key Features

1. **Open and Structured Thinking**
   Fine-tuned on Gemini trajectory data and s1K-1.1 traces, enabling the model to lay out complex thought processes, open reasoning, and multi-step problem solving.

2. **Mathematical and Logical Reasoning**
   Trained with a focus on symbolic logic, arithmetic, and multi-step math problems, making it well suited to STEM education and technical domains.

3. **Code Understanding and Generation**
   Capable of writing, interpreting, and explaining code snippets in Python, JavaScript, and other languages with clarity.

4. **Factual Precision and Reliability**
   Curated datasets and reasoning benchmarks reduce hallucinations, supporting trustworthy outputs for technical content.

5. **Instruction-Tuned for Clarity**
   Strong compliance with structured prompts, delivering step-by-step reasoning, formatted outputs (Markdown, JSON, tables), and clear explanations.

6. **Multilingual Capabilities**
   Supports over 20 languages for educational and technical translation across diverse linguistic contexts.

7. **Optimized Efficiency**
   Uses the 4B-parameter Qwen3 base for resource-friendly deployment while maintaining strong reasoning performance.

## Quickstart with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Crux-Qwen3_OpenThinking-4B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the thought process behind solving: If 5x - 3 = 2x + 12, find x."
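# Build the chat-format input below. The upstream Qwen3 chat template also
# accepts an `enable_thinking` argument to apply_chat_template (assumed to be
# unchanged in this fine-tune): pass enable_thinking=False there if you want
# the model to skip its <think> reasoning block and answer directly.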
messages = [
    {"role": "system", "content": "You are an open thinking tutor who explains reasoning clearly."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Intended Use

* Advanced open and logical reasoning
* Educational STEM tutoring and math problem solving
* Code assistance, explanation, and debugging
* Structured content generation (JSON, Markdown, tables)
* Multilingual reasoning and translation
* Lightweight, efficient deployment for reasoning tasks

## Limitations

* Less suited to highly creative or fictional content generation
* May require clear, unambiguous prompts for best results
* Smaller capacity than larger (14B+) models, which can limit performance on very long or highly complex inputs
* Occasional factual inaccuracies in edge cases
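## Structured Output Example

The Intended Use section lists structured content generation (JSON, Markdown, tables). The sketch below is a minimal illustration of that use case: it reuses the `model` and `tokenizer` objects loaded in the Quickstart and asks for a JSON answer. The prompt wording, sampling values, and parsing fallback are assumptions made for this example, not settings prescribed for the model.

```python
import json

# Ask for a machine-readable answer; assumes `model` and `tokenizer`
# are already loaded as shown in the Quickstart above.
json_prompt = (
    "Solve 5x - 3 = 2x + 12 and reply ONLY with a JSON object of the form "
    '{"steps": ["..."], "x": 0}.'
)
messages = [{"role": "user", "content": json_prompt}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Example sampling values; tune them for your own workload.
output_ids = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.95
)
reply = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)

try:
    print(json.loads(reply))   # parsed dict if the model returned valid JSON
except json.JSONDecodeError:
    print(reply)               # otherwise fall back to the raw text
```

If the model emits a `<think>` block before the JSON, strip it (or disable thinking as noted in the Quickstart comments) before parsing.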