
InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback

🏠 Homepage | πŸ€— InterMT Dataset | πŸ‘ InterMT-Bench

Abstract

As multimodal large language models (MLLMs) continue to advance on challenging tasks, a key question emerges: what essential capabilities are still missing? A critical aspect of human learning is continuous interaction with the environment -- not limited to language, but also involving multimodal understanding and generation. To move closer to human-level intelligence, models must likewise support multi-turn, multimodal interaction: they should comprehend interleaved multimodal contexts and respond coherently in ongoing exchanges. In this work, we present an initial exploration through InterMT -- the first preference dataset for multi-turn multimodal interaction grounded in real human feedback. We particularly emphasize human oversight, introducing expert annotations to guide the process, motivated by the fact that current MLLMs lack such complex interactive capabilities. InterMT captures human preferences at both global and local levels across nine sub-dimensions, and consists of 15.6k prompts, 52.6k multi-turn dialogue instances, and 32.4k human-labeled preference pairs. To compensate for current models' limited capability in multimodal understanding and generation, we introduce an agentic workflow that leverages tool-augmented MLLMs to construct multi-turn QA instances. We further introduce InterMT-Bench to assess the ability of MLLMs to assist judges in multi-turn, multimodal tasks. We demonstrate the utility of InterMT through applications such as judge moderation, and further reveal a multi-turn scaling law of judge models. We hope that open-sourcing our data facilitates further research on advancing the alignment of current MLLMs.
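A minimal sketch of loading the dataset with πŸ€— Datasets is shown below. The repo id `PKU-Alignment/InterMT` is inferred from this card's links, and the splits and fields are not documented here, so treat both as assumptions and consult the dataset card for the actual schema.

```python
# Minimal sketch: load InterMT and peek at one example.
# Assumption: the dataset lives at "PKU-Alignment/InterMT"; the actual
# splits and fields should be checked on the dataset card.
from datasets import load_dataset

dataset = load_dataset("PKU-Alignment/InterMT")
print(dataset)  # shows the available splits and their sizes

first_split = next(iter(dataset))
print(dataset[first_split][0])  # inspect one preference record
```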

InterMT

InterMT-Judge

In this repository, we introduce InterMT-Judge, a judge model fine-tuned from Qwen2.5-VL-3B-Instruct. It is designed to evaluate the quality of each turn in multi-turn dialogues, achieving a 73.5% agreement rate with human assessments.

For more details, please visit our website.
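Since InterMT-Judge is built on Qwen2.5-VL-3B-Instruct, it should load with the standard Qwen2.5-VL API in πŸ€— Transformers. The sketch below follows that generic recipe; the judging prompt is a hypothetical placeholder, as this card does not specify the model's expected input format or scoring rubric.

```python
# Inference sketch using the standard Qwen2.5-VL API from πŸ€— Transformers.
# The judge prompt below is a hypothetical example; InterMT-Judge's exact
# expected input format is not specified on this card.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "PKU-Alignment/InterMT-Judge", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("PKU-Alignment/InterMT-Judge")

# Hypothetical judging request: an image from the dialogue plus the
# turn to be scored.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/turn_image.png"},
            {
                "type": "text",
                "text": "Evaluate the quality of the following dialogue turn: ...",
            },
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [
    out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```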

Citation

Please cite this repository if you find the model or code useful 😊

```bibtex
@article{chen2025intermt,
  title={InterMT: Multi-Turn Interleaved Preference Alignment with Human Feedback},
  author={Boyuan Chen and Donghai Hong and Jiaming Ji and Jiacheng Zheng and Bowen Dong and Jiayi Zhou and Kaile Wang and Josef Dai and Xuyao Wang and Wenqi Chen and Qirui Zheng and Wenxin Li and Sirui Han and Yike Guo and Yaodong Yang},
  year={2025},
  institution={Peking University and Hong Kong University of Science and Technology},
  url={https://pku-intermt.github.io},
  keywords={Multimodal Learning, Multi-Turn Interaction, Human Feedback, Preference Alignment}
}
```