GGUF Quantized Versions

This repository contains GGUF quantized versions of the original XiaomiMiMo/MiMo-VL-7B-SFT model, optimized for use with llama.cpp and other GGUF-compatible inference engines.

Available Files

MiMo-VL-7B-SFT-BF16.gguf

  • Description: BF16 precision GGUF version. Highest quality with larger file size.
  • Use Case: Best for systems with ample memory where maximum precision is required.
  • Approximate Size: ~13-14GB

MiMo-VL-7B-SFT-Q4_K_M.gguf

  • Description: Q4_K_M quantized GGUF version. Good balance of quality and size.
  • Use Case: Recommended for most users. Good performance with reasonable memory usage.
  • Approximate Size: ~4-5GB

MiMo-VL-7B-SFT-Q8_0.gguf

  • Description: Q8_0 quantized GGUF version. High quality with moderate compression.
  • Use Case: Good choice when you need better quality than Q4 but smaller than BF16.
  • Approximate Size: ~7-8GB
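
These size figures follow directly from bits-per-weight arithmetic. A rough back-of-envelope sketch (the bits-per-weight averages are approximations for each llama.cpp scheme, not exact values; real files add metadata overhead and keep some tensors at higher precision):

# Back-of-envelope GGUF size estimates from nominal bits-per-weight.
# The bits/weight averages below are approximations (assumption); actual
# files differ slightly due to metadata and mixed-precision tensors.
PARAMS = 7.62e9  # parameter count listed for this model

for name, bits in [("BF16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.85)]:
    size_gib = PARAMS * bits / 8 / 2**30
    print(f"{name}: ~{size_gib:.1f} GiB")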

Usage

These GGUF files can be used with:

  • llama.cpp
  • LM Studio
  • Ollama
  • text-generation-webui
  • KoboldCpp

Example with llama.cpp (recent builds name the CLI binary llama-cli; older builds use ./main):

./llama-cli -m MiMo-VL-7B-SFT-Q4_K_M.gguf -p "Your prompt here"
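
The files also work from Python via the llama-cpp-python bindings. A minimal text-completion sketch, assuming `pip install llama-cpp-python` and the Q4_K_M file in the working directory (this exercises the text path only; multimodal inference in llama.cpp additionally requires a separate projector file):

from llama_cpp import Llama

# Load the quantized model; n_ctx sets the context window size.
llm = Llama(model_path="MiMo-VL-7B-SFT-Q4_K_M.gguf", n_ctx=4096)

out = llm("Your prompt here", max_tokens=256)
print(out["choices"][0]["text"])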

Original Model Information

Xiaomi-MiMo

โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
MiMo-VL Technical Report
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”



I. Introduction

In this report, we share our efforts to build a compact yet powerful VLM, MiMo-VL-7B. MiMo-VL-7B comprises (1) a native resolution ViT encoder that preserves fine-grained visual details, (2) an MLP projector for efficient cross-modal alignment, and (3) our MiMo-7B language model, specifically optimized for complex reasoning tasks.
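
As an illustration of this three-component layout, a skeletal sketch follows; the class and module names are hypothetical, not the released implementation:

import torch
import torch.nn as nn

class MiMoVLSkeleton(nn.Module):
    """Illustrative only: the three components named in the report."""
    def __init__(self, vision_encoder: nn.Module, language_model: nn.Module,
                 vision_dim: int, lm_dim: int):
        super().__init__()
        self.vision_encoder = vision_encoder   # (1) native-resolution ViT encoder
        self.projector = nn.Sequential(        # (2) MLP projector for cross-modal alignment
            nn.Linear(vision_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )
        self.language_model = language_model   # (3) MiMo-7B language model

    def forward(self, pixel_values: torch.Tensor, text_embeds: torch.Tensor):
        vision_tokens = self.projector(self.vision_encoder(pixel_values))
        # Projected vision tokens are consumed alongside text embeddings by the LM.
        return self.language_model(torch.cat([vision_tokens, text_embeds], dim=1))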

The development of MiMo-VL-7B involves two sequential training processes: (1) A four-stage pre-training phase, which includes projector warmup, vision-language alignment, general multi-modal pre-training, and long-context Supervised Fine-Tuning (SFT). This phase yields the MiMo-VL-7B-SFT model. (2) A subsequent post-training phase, where we introduce Mixed On-policy Reinforcement Learning (MORL), a novel framework that seamlessly integrates diverse reward signals spanning perception accuracy, visual grounding precision, logical reasoning capabilities, and human/AI preferences. This phase yields the MiMo-VL-7B-RL model.
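
The report does not publish MORL's internals; purely as a generic illustration of how heterogeneous reward signals can be combined into one scalar for policy updates (all names and weights below are hypothetical):

from typing import Callable, Dict

RewardFn = Callable[[str, str], float]  # (prompt, response) -> score

def mixed_reward(prompt: str, response: str,
                 reward_fns: Dict[str, RewardFn],
                 weights: Dict[str, float]) -> float:
    # Weighted combination over e.g. perception accuracy, grounding
    # precision, reasoning correctness, and preference scores.
    return sum(weights[name] * fn(prompt, response)
               for name, fn in reward_fns.items())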

We open-source the MiMo-VL-7B series, including checkpoints of both the SFT and RL models. We believe this report, along with the models, will provide valuable insights toward developing powerful reasoning VLMs that benefit the broader community.

๐Ÿ›ค๏ธ During this journey, we find

  • Incorporating high-quality, broad-coverage reasoning data from the pre-training stage is crucial for enhancing model performance
    • We curate high-quality reasoning data by identifying diverse queries, employing large reasoning models to regenerate responses with long CoT, and applying rejection sampling to ensure quality (a sketch follows this list).
    • Rather than treating this as supplementary fine-tuning data, we incorporate substantial volumes of this synthetic reasoning data directly into the later pre-training stages, where extended training yields continued performance improvements without saturation.
  • Mixed On-policy Reinforcement Learning further enhances model performance, while achieving stable simultaneous improvements remains challenging
    • We apply RL across diverse capabilities, including reasoning, perception, grounding, and human preference alignment, spanning text, image, and video modalities. While this hybrid training approach further unlocks the model's potential, interference across data domains remains a challenge.
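
A minimal sketch of the rejection-sampling step described above; the generator and verifier interfaces are hypothetical placeholders, not the actual pipeline:

from typing import Callable, List

def rejection_sample(query: str,
                     generate: Callable[[str], str],
                     is_correct: Callable[[str, str], bool],
                     n_candidates: int = 8) -> List[str]:
    """Draw long-CoT candidates from a large reasoning model and keep
    only those a verifier accepts, e.g. by final-answer matching."""
    candidates = [generate(query) for _ in range(n_candidates)]
    return [resp for resp in candidates if is_correct(query, resp)]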

II. Model Details

Models are available in the Hugging Face collection MiMo-VL and the ModelScope collection MiMo-VL.

Model | Description | Download (Hugging Face) | Download (ModelScope)
MiMo-VL-7B-SFT | VLM with extraordinary reasoning potential after 4-stage pre-training | 🤗 XiaomiMiMo/MiMo-VL-7B-SFT | 🤖️ XiaomiMiMo/MiMo-VL-7B-SFT
MiMo-VL-7B-RL | RL model leapfrogging existing open-source models | 🤗 XiaomiMiMo/MiMo-VL-7B-RL | 🤖️ XiaomiMiMo/MiMo-VL-7B-RL

III. Evaluation Results

General Capabilities

In general visual-language understanding, MiMo-VL-7B models achieve state-of-the-art open-source results.

Reasoning Tasks

In multi-modal reasoning, both the SFT and RL models significantly outperform all compared open-source baselines across these benchmarks.

Results marked with * are obtained using our evaluation framework. Tasks marked with † are evaluated by GPT-4o.

GUI Tasks

MiMo-VL-7B-RL possesses exceptional GUI understanding and grounding capabilities. As a general-purpose VL model, MiMo-VL achieves performance comparable or even superior to that of GUI-specialized models.

Elo Rating

With our in-house evaluation dataset and GPT-4o judgments, MiMo-VL-7B-RL achieves the highest Elo rating among all evaluated open-source vision-language models, ranking first across models spanning from 7B to 72B parameters.
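
The report does not detail the rating procedure; for orientation, here is a minimal sketch of the standard Elo update that such pairwise, judge-based schemes typically build on (K=32 and the 400-point scale are conventional defaults, not values from the report):

# Standard Elo update from one pairwise comparison outcome.
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """score_a is 1.0 if model A wins the judged comparison, 0.0 if it
    loses, 0.5 for a tie. Returns the updated (r_a, r_b)."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Example: two models start at 1000; A wins one judged comparison.
print(elo_update(1000.0, 1000.0, 1.0))  # -> (1016.0, 984.0)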

IV. Deployment

The MiMo-VL-7B series maintains full compatibility with the Qwen2_5_VLForConditionalGeneration architecture for deployment and inference.
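
A minimal sketch of full-precision inference through Hugging Face transformers under that compatibility claim, assuming a recent transformers release that ships Qwen2_5_VLForConditionalGeneration (the image URL is a placeholder):

from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "XiaomiMiMo/MiMo-VL-7B-SFT"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Qwen2.5-VL-style chat message with one image and a text instruction.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/demo.png"},  # placeholder URL
        {"type": "text", "text": "Describe this image."},
    ],
}]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])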

V. Citation

@misc{coreteam2025mimovl,
      title={MiMo-VL Technical Report}, 
      author={{Xiaomi LLM-Core Team}},
      year={2025},
      url={https://github.com/XiaomiMiMo/MiMo-VL}, 
}

VI. Contact

Please contact us at [email protected] or open an issue if you have any questions.

GGUF metadata
  • Model size: 7.62B params
  • Architecture: qwen2vl