Skywork-R1V-38B-AWQ
Benchmark results for Skywork-R1V-38B and its AWQ-quantized variant, compared against a text-only LLM baseline (QwQ-32B-Preview) and VLM baselines:

| Category | Benchmark | QwQ-32B-Preview (LLM) | InternVL-2.5-38B | VILA 1.5-40B | InternVL2-40B | Skywork-R1V-38B | Skywork-R1V-AWQ |
|---|---|---|---|---|---|---|---|
| Reasoning | MATH-500 | 90.6 | - | - | - | 94.0 | 86.0 |
| Reasoning | AIME 2024 | 50.0 | - | - | - | 72.0 | 61.0 |
| Reasoning | GPQA | 54.5 | - | - | - | 61.6 | 56.5 |
| Vision | MathVista (mini) | - | 71.9 | 49.5 | 63.7 | 67.5 | 59.9 |
| Vision | MMMU (Val) | - | 63.9 | 55.1 | 55.2 | 69.0 | 60.1 |
You can run the quantized model with several inference frameworks. For offline inference with vLLM:

```python
from vllm import LLM, SamplingParams

model_name = "Skywork/Skywork-R1V-38B-AWQ"  # or local path
llm = LLM(
    model=model_name,
    dtype="float16",
    quantization="awq",
    gpu_memory_utilization=0.85,
    max_model_len=4096,
    trust_remote_code=True,
)

# Minimal text-only generation
sampling_params = SamplingParams(temperature=0.6, max_tokens=1024)
outputs = llm.generate(["What is 2 + 2?"], sampling_params)
print(outputs[0].outputs[0].text)
```
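For image inputs, vLLM accepts a prompt dict with a `multi_modal_data` field. A minimal sketch reusing the `llm` object above (the exact image placeholder token depends on the model's chat template; `<image>` is assumed here):

```python
from PIL import Image

# Assumption: "<image>" is the model's image placeholder token; check the
# model's chat template if generation looks wrong.
image = Image.open("table.jpg")
outputs = llm.generate(
    {"prompt": "<image>\nDescribe this image.", "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.6, max_tokens=1024),
)
print(outputs[0].outputs[0].text)
```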
To serve the model through vLLM's OpenAI-compatible API server:

```bash
MODEL_ID="Skywork/Skywork-R1V-38B-AWQ" # or local path
CUDA_VISIBLE_DEVICES=0 \
python -m vllm.entrypoints.openai.api_server \
  --model $MODEL_ID \
  --dtype float16 \
  --quantization awq \
  --port 23334 \
  --max-model-len 12000 \
  --gpu-memory-utilization 0.9 \
  --trust-remote-code
```
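Once the server is up, any OpenAI-compatible client can query it. A minimal sketch using the `openai` Python package (the port and model name must match the command above; the `api_key` value is a placeholder, since the server does not check keys unless configured to):

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:23334/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Skywork/Skywork-R1V-38B-AWQ",
    messages=[{"role": "user", "content": "Compute 3 * 17 and explain your steps."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```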
With LMDeploy:

```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model_path = "Skywork/Skywork-R1V-38B-AWQ"  # or local path
engine_config = TurbomindEngineConfig(cache_max_entry_count=0.75)
chat_template_config = ChatTemplateConfig(model_name=model_path)
pipe = pipeline(
    model_path,
    backend_config=engine_config,
    chat_template_config=chat_template_config,
)

# Example: multimodal inference
image = load_image("table.jpg")
response = pipe(("Describe this image.", image))
print(response.text)
```
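Here `cache_max_entry_count=0.75` caps the TurboMind KV cache at roughly 75% of the GPU memory remaining after the weights are loaded; lowering it frees headroom if you hit out-of-memory errors, at the cost of maximum batch size and context length.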
AWQ quantization substantially reduces the memory footprint compared to the original FP16 model, at the cost of a modest drop in benchmark accuracy (see the table above).
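As a back-of-envelope check (assuming 4-bit AWQ weights, and ignoring activations, KV cache, and quantization metadata, so real usage is higher):

```python
# Rough weight-memory estimate for a ~38B-parameter model.
params = 38e9                     # ~38B parameters
fp16_gib = params * 2 / 2**30     # 2 bytes per weight  -> ~71 GiB
awq4_gib = params * 0.5 / 2**30   # 4 bits per weight   -> ~18 GiB
print(f"FP16 weights: ~{fp16_gib:.0f} GiB, AWQ 4-bit weights: ~{awq4_gib:.0f} GiB")
```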
If you use this model in your research, please cite:
```bibtex
@misc{shen2025skyworkr1v3technicalreport,
      title={Skywork-R1V3 Technical Report},
      author={Wei Shen and Jiangbo Pei and Yi Peng and Xuchen Song and Yang Liu and Jian Peng and Haofeng Sun and Yunzhuo Hao and Peiyu Wang and Jianhao Zhang and Yahui Zhou},
      year={2025},
      eprint={2507.06167},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.06167},
}

@misc{chris2025skyworkr1v2multimodalhybrid,
      title={Skywork R1V2: Multimodal Hybrid Reinforcement Learning for Reasoning},
      author={Peiyu Wang and Yichen Wei and Yi Peng and Xiaokun Wang and Weijie Qiu and Wei Shen and Tianyidan Xie and Jiangbo Pei and Jianhao Zhang and Yunzhuo Hao and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2504.16656},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.16656},
}

@misc{peng2025skyworkr1vpioneeringmultimodal,
      title={Skywork R1V: Pioneering Multimodal Reasoning with Chain-of-Thought},
      author={Yi Peng and Peiyu Wang and Xiaokun Wang and Yichen Wei and Jiangbo Pei and Weijie Qiu and Ai Jian and Yunzhuo Hao and Jiachun Pan and Tianyidan Xie and Li Ge and Rongxian Zhuang and Xuchen Song and Yang Liu and Yahui Zhou},
      year={2025},
      eprint={2504.05599},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.05599},
}
```