Skywork-R1V2


📖 R1V2 Report | 💻 GitHub | 🌐 ModelScope


1. Model Introduction

Skywork-R1V2-38B is a state-of-the-art open-source multimodal reasoning model, achieving top-tier performance across multiple benchmarks:

  • On MMMU, it scores 73.6%, the highest among all open-source models to date.
  • On OlympiadBench, it achieves 62.6%, leading other open-source models by a wide margin.
  • R1V2 also performs strongly on MathVision, MMMU-Pro, and MathVista, rivaling proprietary commercial models.
  • Overall, R1V2 stands out as a high-performing, open-source VLM combining powerful visual reasoning and text understanding.

🔧 Model Details

| Model Name | Vision Encoder | Language Model | Hugging Face Link |
|---|---|---|---|
| Skywork-R1V2-38B | InternViT-6B-448px-V2_5 | Qwen/QwQ-32B | 🤗 Link |
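
The inference scripts in section 3 take a local model path; if you want to fetch the weights ahead of time, a minimal sketch using the standard huggingface_hub client (repo id taken from this card; adjust if you pull from ModelScope instead) is:

from huggingface_hub import snapshot_download

# Download the checkpoint once and reuse the returned local path as --model_path.
local_dir = snapshot_download("Skywork/Skywork-R1V2-38B")
print(local_dir)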

2. Evaluation

[Figure: Comparison with larger-scale open-source models]

[Figure: Comparison with proprietary models]

Evaluation Results of State-of-the-Art LLMs and VLMs

Text-reasoning results are accuracy (%) on AIME24, LiveCodeBench, LiveBench, IFEval, and BFCL, and pass@1 on MATH-500, AIME 2024, and GPQA; multimodal-reasoning results are accuracy (%) on MMMU (val), MathVista (mini), MathVision (mini), OlympiadBench, and MMMU-Pro.

| Model | Vision | AIME24 | LiveCodeBench | LiveBench | IFEval | BFCL | MATH-500 | AIME 2024 | GPQA | MMMU (val) | MathVista (mini) | MathVision (mini) | OlympiadBench | MMMU-Pro |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| R1V2-38B | ✅ | 78.9 | 63.6 | 73.2 | 82.9 | 66.3 | 94.0 | 72.0 | 61.6 | 73.6 | 74.0 | 49.0 | 62.6 | 52.0 |
| R1V1-38B | ✅ | 72.0 | 57.2 | 54.6 | 72.5 | 53.5 | – | – | – | 68.0 | 67.0 | – | 40.4 | – |
| DeepSeek-R1-671B | ❌ | 74.3 | 65.9 | 71.6 | 83.3 | 60.3 | 97.3 | 79.8 | 71.5 | – | – | – | – | – |
| GPT-o1 | ❌ | 79.8 | 63.4 | 72.2 | – | – | – | – | – | – | – | – | – | – |
| GPT-o4-mini | ✅ | 93.4 | 74.6 | 78.1 | – | – | 74.6 | 9.3 | 49.9 | 81.6 | 84.3 | 58.0 | – | – |
| Claude 3.5 Sonnet | ✅ | – | – | – | – | – | 78.3 | 16.0 | 65.0 | 66.4 | 65.3 | – | – | – |
| Kimi k1.5 long-cot | ✅ | – | – | – | – | – | 96.2 | 77.5 | – | 70.0 | 74.9 | – | – | – |
| Qwen2.5-VL-72B-Instruct | ✅ | – | – | – | – | – | – | – | – | 70.2 | 74.8 | – | – | – |
| InternVL2.5-78B | ✅ | – | – | – | – | – | – | – | – | 70.1 | 72.3 | – | 33.2 | – |

3. Usage

1. Clone the Repository

git clone https://github.com/SkyworkAI/Skywork-R1V.git
cd Skywork-R1V/inference

2. Set Up the Environment

# For Transformers  
conda create -n r1-v python=3.10 && conda activate r1-v  
bash setup.sh  
# For vLLM  
conda create -n r1v-vllm python=3.10 && conda activate r1v-vllm  
pip install -U vllm
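
Before running inference, it can help to confirm the environment actually sees your GPUs; a minimal sanity check (assuming the standard torch and vllm packages installed above) is:

import torch
import vllm

# Verify CUDA is visible and report how many GPUs are available for tensor parallelism.
print("torch:", torch.__version__, "| CUDA:", torch.cuda.is_available(), "| GPUs:", torch.cuda.device_count())
print("vllm:", vllm.__version__)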

3. Run the Inference Script

Transformers inference

CUDA_VISIBLE_DEVICES="0,1" python inference_with_transformers.py \
    --model_path path \
    --image_paths image1_path \
    --question "your question"
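
If you prefer to call the model from Python rather than through the script, the sketch below shows one way to load the checkpoint with Transformers. It assumes the checkpoint ships custom modeling code exposing an InternVL-style chat() interface (the vision encoder is InternViT); the image preprocessing and chat template are model-specific, so inference_with_transformers.py remains the recommended entry point.

import torch
from transformers import AutoModel, AutoTokenizer

model_path = "Skywork/Skywork-R1V2-38B"  # Hugging Face repo id, or the local --model_path used above
# trust_remote_code loads the custom modeling code bundled with the checkpoint.
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Assumed InternVL-style interface; building pixel_values from an image is handled
# by the official script, so see inference_with_transformers.py for the details.
# response = model.chat(tokenizer, pixel_values, "your question", dict(max_new_tokens=1024))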

vLLM inference

python inference_with_vllm.py \
    --model_path path \
    --image_paths image1_path image2_path \
    --question "your question" \
    --tensor_parallel_size 4
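
For programmatic use, vLLM's offline API can also be called directly. The sketch below is a rough outline only: the sampling settings are illustrative, and the exact prompt template and image placeholder are model-specific and handled by inference_with_vllm.py.

from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="path/to/Skywork-R1V2-38B",  # local path or Hugging Face repo id
    tensor_parallel_size=4,            # match your GPU count
    trust_remote_code=True,
)
sampling = SamplingParams(temperature=0.6, max_tokens=2048)  # illustrative settings

image = Image.open("image1.png")
# NOTE: "<image>" is a generic placeholder here; the real prompt template for this
# model is constructed inside the official script.
outputs = llm.generate(
    {"prompt": "<image>\nyour question", "multi_modal_data": {"image": image}},
    sampling,
)
print(outputs[0].outputs[0].text)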

4. Citation

If you use Skywork-R1V in your research, please cite:

@misc{chris2025skyworkr1v2multimodalhybrid,
      title={Skywork R1V2: Multimodal Hybrid Reinforcement Learning for Reasoning}, 
      author={Chris and Yichen Wei and Yi Peng and Xiaokun Wang and Weijie Qiu and Wei Shen and Tianyidan Xie and Jiangbo Pei and Jianhao Zhang and Yunzhuo Hao and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2504.16656},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.16656}, 
}
@misc{peng2025skyworkr1vpioneeringmultimodal,
      title={Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought}, 
      author={Yi Peng and Chris and Xiaokun Wang and Yichen Wei and Jiangbo Pei and Weijie Qiu and Ai Jian and Yunzhuo Hao and Jiachun Pan and Tianyidan Xie and Li Ge and Rongxian Zhuang and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2504.05599},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.05599}, 
}

This project is released under an open-source license.

Model size: 38.4B parameters · Tensor type: BF16 (Safetensors)