Model Card for WeThink-Qwen2.5VL-7B

Repository: https://github.com/yangjie-cv/WeThink

Paper: https://arxiv.org/abs/2506.07905

πŸ† Performance Highlights

WeThink-Qwen2.5VL-7B achieves:

πŸš€ Quick Start

Inference

```bash
git clone https://github.com/yangjie-cv/WeThink
cd WeThink
python inference.py
```

πŸ’‘ ​​Note​​: System prompt is required during inference.

πŸ“Š Evaluation

We have integrated WeThink-Qwen2.5VL-7B into the VLMEvalKit. Please follow its Quickstart guide to evaluate WeThink-Qwen2.5VL-7B on various benchmarks.
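A typical VLMEvalKit run looks like the snippet below. The benchmark names follow VLMEvalKit's conventions, but the `--model` identifier shown here is an assumption; confirm the registered name against the kit's supported-model list before running.

```bash
# Run inside a VLMEvalKit checkout after installing it (e.g. pip install -e .).
# --model name is an assumption; check VLMEvalKit's model registry for the exact identifier.
python run.py --data MMBench_DEV_EN MathVista_MINI --model WeThink-Qwen2.5VL-7B --verbose
```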

Citation

```bibtex
@misc{yang2025wethink,
      title={WeThink: Toward General-purpose Vision-Language Reasoning via Reinforcement Learning},
      author={Jie Yang and Feipeng Ma and Zitian Wang and Dacheng Yin and Kang Rong and Fengyun Rao and Ruimao Zhang},
      year={2025},
      eprint={2506.07905},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.07905}
}
```