# Formulae/MITA-V1.1-7B-2-24-2025

## Overview
Formulae/MITA-V1.1-7B is built for reasoning and deep thinking. It merges models designed for logical analysis, structured thought, and problem-solving. Using the SLERP merge method, it blends multiple reasoning-focused models while preserving their core strengths.
| Rank | Type | Model | Average | IFEval | BBH | MATH | GPQA | MUSR | MMLU-PRO | CO₂ Cost |
|---|---|---|---|---|---|---|---|---|---|---|
| 914 | 🤝 | formulae/mita-v1.1-7b-2-24-2025 | 29.48% | 34.12% | 35.44% | 43.50% | 8.61% | 16.06% | 39.15% | 0.67 kg |
| 1403 | 🤝 | formulae/mita-v1.2-7b-2-24-2025 | 24.86% | 25.64% | 28.41% | 48.79% | 7.49% | 12.63% | 26.21% | 0.64 kg |
## Merge Details
- Base Model: open-thoughts/OpenThinker-7B
- Merged Models:
- Merge Method: SLERP (Spherical Linear Interpolation)
- Data Type: bfloat16
- Merge Parameters: V-shaped interpolation curve (the interpolation factor varies by layer, high at the ends and low in the middle)
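The card does not give the exact curve values, but a "V-shaped" parameter schedule generally means the interpolation factor `t` is sampled per layer, dipping toward one model in the middle layers and rising toward the other at the ends. A hypothetical helper sketching such a schedule (the function name and defaults are illustrative, not mergekit's API):

```python
def v_shaped_t(num_points, t_min=0.0, t_max=1.0):
    """Return a V-shaped schedule of interpolation factors:
    t_max at both ends, t_min at the midpoint, linear in between."""
    if num_points == 1:
        return [t_max]
    mid = (num_points - 1) / 2
    return [t_min + (t_max - t_min) * abs(i - mid) / mid
            for i in range(num_points)]

# Example: five sample points along the layer stack.
print(v_shaped_t(5))  # [1.0, 0.5, 0.0, 0.5, 1.0]
```

In practice, a merge tool would interpolate this schedule across all transformer layers, so middle layers lean toward one component model and the embedding/output ends toward the other.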
## What is SLERP?
SLERP (Spherical Linear Interpolation) is a technique for smoothly blending multiple models while maintaining their unique properties. Unlike simple weight averaging, SLERP preserves sharpness and structure during merging, making it effective for complex reasoning models. It was originally introduced by Ken Shoemake for quaternion interpolation in 3D rotations. In model merging, SLERP ensures a balanced mix of the component models, maintaining coherence and reducing degradation in performance.
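The core SLERP formula can be sketched in a few lines. This is a minimal illustration treating two flattened weight tensors as vectors, not mergekit's actual implementation:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between vectors v0 and v1.
    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between their directions rather than the straight chord."""
    v0_unit = v0 / np.linalg.norm(v0)
    v1_unit = v1 / np.linalg.norm(v1)
    # Angle between the two directions, clipped for numerical safety.
    omega = np.arccos(np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1
```

Unlike straight averaging, which shortens the interpolated vector (pulling weights toward zero), the spherical path keeps intermediate points at full magnitude, which is one intuition for why SLERP merges tend to degrade less.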
## Limitations & Risks
⚠ Misinformation – The model has no built-in fact-checking and is prone to hallucinations.
⚠ Complex Reasoning – Outputs may be logically consistent yet inaccurate about the real world.
## Usage Disclaimer
Formulae/MITA-V1.1-7B is an experimental reasoning model. It is not safe for deployment without manual oversight. Future updates will improve reliability, interpretability, and structured thought processes.
## Contribute
We welcome contributions, including quantizations, fine-tunes, and other improvements.
💡 Support Us: Buy Me a Coffee
📩 Contact: [email protected]
## Future Development
This is part of the MITA series. We are working toward an MoE model that will combine multiple expert models for more adaptable reasoning and problem-solving.
Made possible with MergeKit.