Phi-4-Mini-Reasoning (GGUF Q4_K_M) - SandLogic Lexicons
Model Summary
Phi-4-Mini-Reasoning is a lightweight open-source model from the Phi-4 family, trained with a strong focus on high-quality, reasoning-dense synthetic data. It has been further fine-tuned for advanced mathematical reasoning and supports a 128K-token context length. The model is optimized for logic-intensive scenarios while remaining compact, making it well suited to memory- and compute-constrained environments.
- Model Family: Phi-4
- Parameter Count: 3.8B
- Architecture: Dense decoder-only Transformer
- Context Length: 128K tokens
- Quantization: GGUF Q4_K_M
- Supported Language: English
- Release Date: April 2025
- Cutoff Date: February 2025
Intended Uses
Primary Use Cases
Phi-4-Mini-Reasoning is designed to excel at:
- Multi-step mathematical reasoning
- Formal proof generation
- Symbolic computation
- Solving advanced word problems
- Tasks requiring structured logic and analytical thinking
Its compact size and long context window make it well suited to latency-sensitive applications and deployments on resource-constrained hardware.
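For chat-style use, prompts are typically wrapped in the model's chat template. A minimal sketch of building such a prompt for a multi-step math problem is below; the `<|system|>` / `<|user|>` / `<|assistant|>` / `<|end|>` markers follow the format published for Phi-4-mini, but you should verify them against the tokenizer's chat template before relying on them.

```python
# Sketch: assemble a chat-style prompt for a multi-step math question.
# The special markers are assumed from Phi-4-mini's documented chat
# format; confirm against the model's tokenizer config.
def build_prompt(
    question: str,
    system: str = "You are a careful math tutor. Reason step by step.",
) -> str:
    return (
        f"<|system|>{system}<|end|>"
        f"<|user|>{question}<|end|>"
        f"<|assistant|>"
    )

prompt = build_prompt(
    "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"
)
print(prompt)
```

In practice, most inference frameworks (e.g. llama.cpp's chat mode) apply this template automatically, so manual assembly is only needed for raw-completion APIs.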
Use Case Considerations
- This model is optimized specifically for mathematical reasoning tasks.
- It is not evaluated for general-purpose downstream tasks such as conversational AI or creative writing.
- Developers should:
- Assess use case suitability.
- Account for limitations in multi-language support.
- Evaluate performance, safety, and fairness—especially in high-risk or regulated environments.
- Ensure compliance with all applicable laws and regulations (e.g., privacy and trade compliance).
Training Details
- Model Architecture: Same as Phi-4-Mini with 3.8B parameters
- Notable Enhancements:
- 200K vocabulary
- Grouped-query attention
- Shared input/output embeddings
- Training Dataset Size: 150B tokens
- Training Duration: 2 days
- Hardware Used: 128 × H100-80G GPUs
- Training Date: February 2025
- Output: Generated text
- Input Format: Text (chat-style prompts recommended)
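Among the enhancements listed above, grouped-query attention (GQA) lets several query heads share a single key/value head, which shrinks the KV cache at inference time. A minimal NumPy sketch of the mechanism follows; the head counts are illustrative only, not Phi-4-Mini's actual configuration.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """GQA sketch: q is (n_q_heads, seq, d); k and v are (n_kv_heads, seq, d).
    Each group of n_q_heads // n_kv_heads query heads attends over the
    same shared key/value head."""
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads          # query heads per KV head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                      # index of the shared KV head
        scores = q[h] @ k[kv].T / np.sqrt(d)
        # numerically stable softmax over the key dimension
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[h] = weights @ v[kv]
    return out

# 8 query heads share 2 KV heads (4-to-1 grouping) -- illustrative sizes.
q = np.random.randn(8, 4, 16)
k = np.random.randn(2, 4, 16)
v = np.random.randn(2, 4, 16)
print(grouped_query_attention(q, k, v, n_kv_heads=2).shape)  # (8, 4, 16)
```

The memory saving comes from storing only `n_kv_heads` key/value tensors per layer instead of one per query head, which matters most for long-context inference such as the 128K window here.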
Integration in Lexicons
This quantized GGUF Q4_K_M version of Phi-4-Mini-Reasoning is included in our SandLogic Lexicons model zoo, making it readily available for efficient inference in edge deployments and research use cases focused on math reasoning.
For optimal results, we recommend using Phi-4-Mini-Reasoning for tasks that require deep mathematical analysis and structured problem solving.
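A typical way to run a GGUF quantization locally is via llama.cpp. The sketch below downloads the repository and launches the model in conversation mode; the exact `.gguf` filename is a hypothetical placeholder, so check the repository's file list before running.

```shell
# Download the GGUF repository (requires huggingface_hub's CLI).
huggingface-cli download SandLogicTechnologies/Phi-4-mini-reasoning-GGUF \
  --local-dir ./models

# Run interactively with llama.cpp; -c sets the context window
# (up to 128K tokens, memory permitting). The filename below is a
# placeholder -- use the actual file shipped in the repository.
llama-cli -m ./models/phi-4-mini-reasoning-Q4_K_M.gguf -c 8192 -cnv
```

Smaller `-c` values reduce memory use, which is usually the right trade-off on edge hardware.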
Model tree for SandLogicTechnologies/Phi-4-mini-reasoning-GGUF
- Base model: microsoft/Phi-4-mini-reasoning