Papers
arxiv:2508.11737

Ovis2.5 Technical Report

Published on Aug 15
· Submitted by runninglsy on Aug 19
#1 Paper of the day
Authors: Yu Xia, et al.
Abstract

Ovis2.5, a native-resolution vision transformer with multimodal reasoning, achieves state-of-the-art performance on various benchmarks through advanced training techniques and efficient scaling methods.

AI-generated summary

We present Ovis2.5, a successor to Ovis2 designed for native-resolution visual perception and strong multimodal reasoning. Ovis2.5 integrates a native-resolution vision transformer that processes images at their native, variable resolutions, avoiding the degradation from fixed-resolution tiling and preserving both fine detail and global layout -- crucial for visually dense content like complex charts. To strengthen reasoning, we train the model to move beyond linear chain-of-thought and perform reflection -- including self-checking and revision. This advanced capability is exposed as an optional "thinking mode" at inference time, allowing users to trade latency for enhanced accuracy on difficult inputs. The model is trained via a comprehensive five-phase curriculum that progressively builds its skills. The process begins with foundational visual and multimodal pretraining, advances through large-scale instruction tuning, and culminates in alignment and reasoning enhancement using DPO and GRPO. To scale these upgrades efficiently, we employ multimodal data packing and hybrid parallelism, yielding a significant end-to-end speedup. We release two open-source models: Ovis2.5-9B and Ovis2.5-2B. The latter continues the "small model, big performance" philosophy of Ovis2, making it ideal for resource-constrained, on-device scenarios. On the OpenCompass multimodal leaderboard, Ovis2.5-9B averages 78.3, marking a substantial improvement over its predecessor, Ovis2-8B, and achieving state-of-the-art results among open-source MLLMs in the sub-40B parameter range; Ovis2.5-2B scores 73.9, establishing SOTA for its size. Beyond aggregate scores, Ovis2.5 achieves leading results on STEM benchmarks, exhibits strong capabilities on grounding and video tasks, and achieves open-source SOTA at its scale for complex chart analysis.

Community

From the paper author and submitter:
  • Github: github.com/AIDC-AI/Ovis
  • 9B Model: huggingface.co/AIDC-AI/Ovis2.5-9B
  • 2B Model: huggingface.co/AIDC-AI/Ovis2.5-2B
  • 9B Demo: huggingface.co/spaces/AIDC-AI/Ovis2.5-9B
  • 2B Demo: huggingface.co/spaces/AIDC-AI/Ovis2.5-2B
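For readers who want to try the released checkpoints, here is a minimal loading sketch. It assumes the standard Hugging Face `transformers` `from_pretrained` API with `trust_remote_code=True` (common for custom MLLM repositories); the exact usage, including how to enable the optional "thinking mode" at inference time, is documented in the model cards linked above.

```python
# Hypothetical loading sketch -- repo IDs are taken from the links above;
# the loading API is an assumption, not confirmed by this page.
REPO_9B = "AIDC-AI/Ovis2.5-9B"
REPO_2B = "AIDC-AI/Ovis2.5-2B"

def load_ovis(repo_id: str):
    """Load an Ovis2.5 checkpoint from the Hugging Face Hub.

    Deferred import so this sketch can be read without transformers
    installed; actually calling it downloads the model weights.
    """
    from transformers import AutoModelForCausalLM

    return AutoModelForCausalLM.from_pretrained(
        repo_id,
        trust_remote_code=True,  # Ovis ships custom modeling code
    )
```

Usage would be `model = load_ovis(REPO_9B)` for the larger checkpoint, or `REPO_2B` for the on-device-oriented 2B variant.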

Appreciate your great effort xD


Models citing this paper: 2

Datasets citing this paper: 0


Spaces citing this paper: 3

Collections including this paper: 2