Overview
The remarkable reasoning capability of Large Language Models (LLMs) stems from cognitive behaviors that emerge during reinforcement learning against verifiable rewards. This work investigates how to transfer that principle to Multimodal LLMs (MLLMs) to unlock advanced visual reasoning.
We introduce a two-stage paradigm built on Qwen2.5-VL-7B: massive text-only cold-start fine-tuning, followed by multimodal reinforcement learning (RL) spanning nearly 1,000 steps, a scale surpassing all prior open-source efforts. This pioneering work reveals three fundamental insights:
- Behavior transfer emerges surprisingly early during cold start, driven by linguistic mental imagery.
- Cold start broadly memorizes visual behaviors, while RL critically discerns and scales up effective patterns.
- Transfer strategically favors high-utility behaviors such as visual reflection.
Our resulting model, Open-Vision-Reasoner (OVR), achieves state-of-the-art performance on a suite of reasoning benchmarks, including 95.3% on MATH500, 51.8% on MathVision, and 54.6% on MathVerse. We release our model, data, and training dynamics to catalyze the development of more capable, behavior-aligned multimodal reasoners.
Model Card
| Model | Description | Download |
|---|---|---|
| OVR-7B-ColdStart | Intermediate model after massive language-only cold-start fine-tuning | [🤗 OVR-7B-ColdStart](https://huggingface.co/Kangheng/OVR-7B-ColdStart) |
| OVR-7B-RL | Final model after large-scale multimodal RL training | [🤗 OVR-7B-RL](https://huggingface.co/Kangheng/OVR-7B-RL) |
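Since OVR builds on Qwen2.5-VL-7B, the checkpoints should load through the standard Qwen2.5-VL interface in Transformers. Below is a minimal inference sketch under that assumption; the image URL is a placeholder, and the `qwen_vl_utils` helper comes from the upstream Qwen2.5-VL workflow (`pip install qwen-vl-utils`), not from this repository.

```python
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # standard Qwen2.5-VL image preprocessing helper

model_id = "Kangheng/OVR-7B-RL"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Build a multimodal chat message: one image plus a reasoning prompt.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/problem.png"},  # hypothetical image URL
        {"type": "text", "text": "Solve the problem shown in the image step by step."},
    ],
}]

# Render the chat template, extract vision inputs, and run generation.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=2048)
# Strip the prompt tokens before decoding the answer.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```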
Performance
Training Dynamics and Performance Evolution
Model Deployment
```bash
vllm serve Kangheng/OVR-7B-ColdStart \
  --port 8000 --host 0.0.0.0 \
  --tensor-parallel-size 1 \
  --gpu-memory-utilization 0.6
```
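Once the server is running, it exposes an OpenAI-compatible API. The following sketch shows one way to send a multimodal reasoning query, assuming the server address above; the image URL and sampling parameters are placeholders.

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server; vLLM ignores the API key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Kangheng/OVR-7B-ColdStart",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/problem.png"}},  # hypothetical image URL
            {"type": "text", "text": "Solve the problem shown in the image step by step."},
        ],
    }],
    temperature=0.6,   # placeholder sampling settings
    max_tokens=2048,
)
print(response.choices[0].message.content)
```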
Model Tree
OVR-7B-RL is fine-tuned from the base model Qwen/Qwen2.5-VL-7B-Instruct.