arxiv:2505.20292

OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation

Published on May 26
Submitted by BestWishYsh on May 28
Abstract

AI-generated summary: OpenS2V-Nexus provides benchmarks and a large dataset to evaluate and advance Subject-to-Video (S2V) generation, focusing on subject consistency and naturalness in generated videos.

Subject-to-Video (S2V) generation aims to create videos that faithfully incorporate reference content, providing enhanced flexibility in video production. To establish the infrastructure for S2V generation, we propose OpenS2V-Nexus, consisting of (i) OpenS2V-Eval, a fine-grained benchmark, and (ii) OpenS2V-5M, a million-scale dataset. In contrast to existing S2V benchmarks inherited from VBench that focus on global and coarse-grained assessment of generated videos, OpenS2V-Eval focuses on the model's ability to generate subject-consistent videos with natural subject appearance and identity fidelity. For these purposes, OpenS2V-Eval introduces 180 prompts from seven major categories of S2V, which incorporate both real and synthetic test data. Furthermore, to accurately align human preferences with S2V benchmarks, we propose three automatic metrics, NexusScore, NaturalScore, and GmeScore, to separately quantify subject consistency, naturalness, and text relevance in generated videos. Building on this, we conduct a comprehensive evaluation of 16 representative S2V models, highlighting their strengths and weaknesses across different types of content. Moreover, we create the first open-source large-scale S2V generation dataset, OpenS2V-5M, which consists of five million high-quality 720P subject-text-video triples. Specifically, we ensure subject-information diversity in our dataset by (1) segmenting subjects and building pairing information via cross-video associations and (2) prompting GPT-Image-1 on raw frames to synthesize multi-view representations. Through OpenS2V-Nexus, we deliver a robust infrastructure to accelerate future S2V generation research.
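
To make the evaluation setup concrete, here is a minimal sketch of how the three automatic metrics (NexusScore, NaturalScore, GmeScore) could be combined into a single per-video score. The field names, weights, and weighted-average rule are illustrative assumptions; the paper's actual aggregation procedure is not specified on this page.

```python
# Hypothetical aggregation of the three OpenS2V-Eval metrics for one video.
# Weights and the weighted-average rule are placeholders, not the paper's formula.
from dataclasses import dataclass


@dataclass
class S2VScores:
    nexus_score: float    # subject consistency, assumed normalized to [0, 1]
    natural_score: float  # naturalness of subject appearance, assumed in [0, 1]
    gme_score: float      # text relevance, assumed in [0, 1]


def aggregate(scores: S2VScores,
              weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """Return a weighted average of the three per-video metrics (weights are placeholders)."""
    w_nexus, w_natural, w_gme = weights
    total = (w_nexus * scores.nexus_score
             + w_natural * scores.natural_score
             + w_gme * scores.gme_score)
    return total / (w_nexus + w_natural + w_gme)


if __name__ == "__main__":
    example = S2VScores(nexus_score=0.82, natural_score=0.74, gme_score=0.91)
    print(f"Overall score: {aggregate(example):.3f}")
```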

Community

Paper author and submitter (edited 4 days ago):

Introducing OpenS2V-Nexus, which consists of (i) OpenS2V-Eval, a fine-grained benchmark, and (ii) OpenS2V-5M, a million-scale dataset. You are welcome to try it!

All resources are open-sourced! Let's advance S2V research together! 💡

Code: https://github.com/PKU-YuanGroup/OpenS2V-Nexus
Page: https://pku-yuangroup.github.io/OpenS2V-Nexus
Leaderboard: https://huggingface.co/spaces/BestWishYsh/OpenS2V-Eval
OpenS2V-Eval: https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval
OpenS2V-5M: https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M
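
As a quick start, the sketch below shows one way to pull the OpenS2V-Eval benchmark files locally with the standard huggingface_hub client. The target directory is arbitrary, and the internal file layout (prompts, reference images, and so on) is an assumption to verify against the repository's own README rather than this snippet.

```python
# Minimal sketch for fetching the OpenS2V-Eval benchmark repo from the Hugging Face Hub.
# snapshot_download mirrors the raw dataset repository to a local directory.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="BestWishYsh/OpenS2V-Eval",
    repo_type="dataset",          # both OpenS2V-Eval and OpenS2V-5M are dataset repos
    local_dir="./OpenS2V-Eval",   # hypothetical target directory
)
print(f"Benchmark files downloaded to: {local_path}")
```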

Paper author and submitter:

For the first time, we introduce Nexus Data in OpenS2V-5M, which achieves higher quality than the data used in previous methods.



Models citing this paper: 1
Datasets citing this paper: 2
Spaces citing this paper: 1
Collections including this paper: 2