OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation
Abstract
OpenS2V-Nexus provides a fine-grained benchmark and a million-scale dataset to evaluate and advance Subject-to-Video (S2V) generation, focusing on subject consistency and naturalness in generated videos.
Subject-to-Video (S2V) generation aims to create videos that faithfully incorporate reference content, offering greater flexibility in video production. To establish the infrastructure for S2V generation, we propose OpenS2V-Nexus, consisting of (i) OpenS2V-Eval, a fine-grained benchmark, and (ii) OpenS2V-5M, a million-scale dataset. In contrast to existing S2V benchmarks inherited from VBench, which perform global and coarse-grained assessment of generated videos, OpenS2V-Eval focuses on a model's ability to generate subject-consistent videos with natural subject appearance and identity fidelity. To this end, OpenS2V-Eval introduces 180 prompts spanning seven major S2V categories, incorporating both real and synthetic test data. Furthermore, to align S2V benchmarks with human preferences, we propose three automatic metrics, NexusScore, NaturalScore, and GmeScore, which separately quantify subject consistency, subject naturalness, and text relevance in generated videos. Building on this, we conduct a comprehensive evaluation of 16 representative S2V models, highlighting their strengths and weaknesses across content categories. Moreover, we create OpenS2V-5M, the first open-source large-scale S2V generation dataset, which consists of five million high-quality 720P subject-text-video triples. Specifically, we ensure subject-information diversity in the dataset by (1) segmenting subjects and building pairing information via cross-video associations and (2) prompting GPT-Image-1 on raw frames to synthesize multi-view representations. Through OpenS2V-Nexus, we deliver a robust infrastructure to accelerate future S2V generation research.
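To make the evaluation protocol concrete, below is a minimal sketch of how per-dimension scores like these could be aggregated into a single leaderboard number. The `S2VScores` container, the field names, and the weights are hypothetical illustrations, not the paper's actual aggregation.

```python
from dataclasses import dataclass

@dataclass
class S2VScores:
    nexus: float    # subject consistency (NexusScore), assumed in [0, 1]
    natural: float  # subject naturalness (NaturalScore), assumed in [0, 1]
    gme: float      # text relevance (GmeScore), assumed in [0, 1]

def total_score(scores: S2VScores, weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted average of the three automatic metrics.

    The weights are illustrative placeholders; the official
    OpenS2V-Eval leaderboard defines its own aggregation.
    """
    w_nexus, w_natural, w_gme = weights
    return w_nexus * scores.nexus + w_natural * scores.natural + w_gme * scores.gme

print(total_score(S2VScores(nexus=0.72, natural=0.65, gme=0.81)))  # 0.71
```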
Community
Introducing OpenS2V-Nexus, which consists of (i) OpenS2V-Eval, a fine-grained benchmark, and (ii) OpenS2V-5M, a million-scale dataset. You are welcome to try it!
All resources are open-sourced! Let's advance S2V research together! 💡
Code: https://github.com/PKU-YuanGroup/OpenS2V-Nexus
Page: https://pku-yuangroup.github.io/OpenS2V-Nexus
LeaderBoard: https://huggingface.co/spaces/BestWishYsh/OpenS2V-Eval
OpenS2V-Eval: https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval
OpenS2V-5M: https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M
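As a quick start, the benchmark and dataset can be pulled with the standard Hugging Face `datasets` API. This is a minimal sketch assuming the repos load as regular datasets; the split name and record schema should be checked against the dataset cards linked above.

```python
from datasets import load_dataset

# Stream the million-scale set so the 5M triples are not downloaded up front.
# The split name and record schema are assumptions; see the dataset card.
train = load_dataset("BestWishYsh/OpenS2V-5M", split="train", streaming=True)

sample = next(iter(train))
print(sample.keys())  # inspect the subject-text-video fields
```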
This is an automated message from the Librarian Bot. I found the following papers similar to this one, recommended by the Semantic Scholar API:
- LoVR: A Benchmark for Long Video Retrieval in Multimodal Contexts (2025)
- HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video Generation (2025)
- Video-Bench: Human-Aligned Video Generation Benchmark (2025)
- Face Consistency Benchmark for GenAI Video (2025)
- LOVE: Benchmarking and Evaluating Text-to-Video Generation and Video-to-Text Interpretation (2025)
- SAMA: Towards Multi-Turn Referential Grounded Video Chat with Large Language Models (2025)
- RefVNLI: Towards Scalable Evaluation of Subject-driven Text-to-image Generation (2025)