---
base_model:
- Qwen/Qwen2.5-VL-72B-Instruct
language:
- en
license: apache-2.0
tags:
- transformers
- multimodal
pipeline_tag: visual-question-answering
---
# VL-Rethinker-72B
**🚀 News:** <u>We release our meticulously curated collection of RL training queries for multimodal reasoning: [ViRL39K](https://huggingface.co/datasets/TIGER-Lab/ViRL39K).</u>
**VL-Rethinker-72B** achieves SoTA results on various multimodal reasoning benchmarks.
It is trained on top of [**VL-Reasoner**](https://huggingface.co/TIGER-Lab/VL-Reasoner-72B/) (obtained via **GRPO-SSR** training) using the **Forced Rethinking** technique.
For details of our approach and performance comparison, please see our [paper](https://github.com/TIGER-AI-Lab/VL-Rethinker/blob/main/paper.pdf).
For details of training and evaluation, please see our [code repo](https://github.com/TIGER-AI-Lab/VL-Rethinker/).
Explore further via the following links:
| [**🚀Project Page**](https://tiger-ai-lab.github.io/VL-Rethinker/) | [**📖Paper**](https://arxiv.org/abs/2504.08837) | [**🔗Github**](https://github.com/TIGER-AI-Lab/VL-Rethinker/) | [**🤗Data**](https://huggingface.co/datasets/TIGER-Lab/ViRL39K) |
## Prompt
Append the following after the user query:
```python
"""Guidelines:
Please think step by step, and **regularly perform self-questioning, self-verification, self-correction to check your ongoing reasoning**, using connectives such as "Wait a moment", "Wait, does it seem right?", etc. Remember to put your final answer within \\boxed{}.
"""
```
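Below is a minimal inference sketch (not part of the original card) showing one way to append these guidelines to the user query using the standard Qwen2.5-VL `transformers` recipe. The repo id `TIGER-Lab/VL-Rethinker-72B`, the example image URL, and the question are assumptions; adjust them to your setup.
```python
# Usage sketch, assuming the model is hosted as "TIGER-Lab/VL-Rethinker-72B"
# and follows the standard Qwen2.5-VL inference recipe in transformers.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

GUIDELINES = (
    '\nGuidelines: Please think step by step, and **regularly perform self-questioning, '
    'self-verification, self-correction to check your ongoing reasoning**, using connectives '
    'such as "Wait a moment", "Wait, does it seem right?", etc. '
    'Remember to put your final answer within \\boxed{}.'
)

model_id = "TIGER-Lab/VL-Rethinker-72B"  # assumed repo id
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

question = "How many apples are on the table?"  # example query
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/apples.jpg"},  # placeholder image
            {"type": "text", "text": question + GUIDELINES},  # guidelines appended after the query
        ],
    }
]

# Build the prompt with the chat template, prepare vision inputs, and generate.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=2048)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```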
## Citation
If you find this model useful, please cite our work:
```bibtex
@article{vl-rethinker,
  title={VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning},
  author={Wang, Haozhe and Qu, Chao and Huang, Zuming and Chu, Wei and Lin, Fangzhen and Chen, Wenhu},
  journal={arXiv preprint arXiv:2504.08837},
  year={2025}
}
```