---
license: mit
library_name: transformers
pipeline_tag: image-text-to-text
---

# Skywork-R1V3-38B-AWQ
## 📖 [R1V3 Report](https://arxiv.org/abs/2507.06167) | 💻 [GitHub](https://github.com/SkyworkAI/Skywork-R1V) | 🌐 [ModelScope](https://modelscope.cn/models/Skywork/Skywork-R1V3-38B)
[![GitHub Stars](https://img.shields.io/github/stars/SkyworkAI/Skywork-R1V)](https://github.com/SkyworkAI/Skywork-R1V/stargazers) [![GitHub Forks](https://img.shields.io/github/forks/SkyworkAI/Skywork-R1V)](https://github.com/SkyworkAI/Skywork-R1V/fork)
## Evaluation
Performance comparison on multimodal reasoning benchmarks (MMMU and MathVista).
| Model | MMMU | MathVista |
|:--|:--:|:--:|
| **Proprietary Models** | | |
| Claude-3.7-Sonnet | 75.0 | 66.8 |
| OpenAI-4o | 70.7 | 62.9 |
| **Open-Source Models** | | |
| InternVL3-78B | 72.2 | 72.2 |
| Qwen2.5-VL-72B | 70.3 | 74.8 |
| QvQ-Preview-72B | 70.3 | 71.4 |
| Skywork-R1V3 | 76.0 | 77.1 |
| **Skywork-R1V3-AWQ** | 66.7 | 70.5 |
## Usage

You can run the quantized model with different inference frameworks:

### Using vLLM

#### Python API

```python
from vllm import LLM, SamplingParams

model_name = "Skywork/Skywork-R1V3-38B-AWQ"  # or local path
llm = LLM(
    model_name,
    dtype="float16",
    quantization="awq",
    gpu_memory_utilization=0.9,
    max_model_len=4096,
    trust_remote_code=True,
)

# Minimal text-only generation as a smoke test; for image inputs,
# pass multimodal data as described in the vLLM documentation.
sampling_params = SamplingParams(temperature=0.0, max_tokens=512)
outputs = llm.generate(["What is AWQ quantization?"], sampling_params)
print(outputs[0].outputs[0].text)
```

#### OpenAI-compatible API Server

```bash
MODEL_ID="Skywork/Skywork-R1V3-38B-AWQ" # or local path
CUDA_VISIBLE_DEVICES=0 \
python -m vllm.entrypoints.openai.api_server \
  --model $MODEL_ID \
  --dtype float16 \
  --quantization awq \
  --port 23334 \
  --max-model-len 12000 \
  --gpu-memory-utilization 0.9 \
  --trust-remote-code
```

For a client-side example of querying this server, see the sketch at the end of this card.

### Using LMDeploy

```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

model_path = "Skywork/Skywork-R1V3-38B-AWQ"  # or local path

# Reserve 75% of free GPU memory for the KV cache.
engine_config = TurbomindEngineConfig(cache_max_entry_count=0.75)
chat_template_config = ChatTemplateConfig(model_name=model_path)
pipe = pipeline(
    model_path,
    backend_config=engine_config,
    chat_template_config=chat_template_config,
)

# Example: multimodal inference on a local image.
image = load_image('table.jpg')
response = pipe(('Describe this image.', image))
print(response.text)
```

## Hardware Requirements

AWQ quantization reduces the memory footprint compared to the original FP16 model. We recommend:

- At least one GPU with 30GB+ VRAM for inference
- 40GB+ VRAM for optimal performance with longer contexts

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{shen2025skyworkr1v3technicalreport,
      title={Skywork-R1V3 Technical Report},
      author={Wei Shen and Jiangbo Pei and Yi Peng and Xuchen Song and Yang Liu and Jian Peng and Haofeng Sun and Yunzhuo Hao and Peiyu Wang and Jianhao Zhang and Yahui Zhou},
      year={2025},
      eprint={2507.06167},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.06167},
}
```
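## Querying the OpenAI-compatible Server

Once the vLLM server above is running, any OpenAI-compatible client can talk to it. Below is a minimal sketch using the official `openai` Python package; the port (23334) and model name mirror the launch command above, and the image URL is a hypothetical placeholder to replace with your own.

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:23334/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Skywork/Skywork-R1V3-38B-AWQ",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                # Hypothetical image URL; replace with your own.
                {"type": "image_url", "image_url": {"url": "https://example.com/table.jpg"}},
            ],
        }
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```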