---
title: OpenS2V Eval
emoji: 📊
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 5.31.0
app_file: app.py
pinned: false
license: apache-2.0
short_description: A Detailed Benchmark for Subject-to-Video Generation
thumbnail: >-
https://cdn-uploads.huggingface.co/production/uploads/63468720dd6d90d82ccf3450/N9kKR052363-MYkJkmD2V.png
---
If you like our project, please give us a star ⭐ on [GitHub](https://github.com/PKU-YuanGroup/OpenS2V-Nexus) for the latest updates.
## ✨ Summary
**OpenS2V-Eval** introduces 180 prompts spanning seven major categories of subject-to-video (S2V) generation, incorporating both real and synthetic test data. Furthermore,
to better align S2V benchmarks with human preferences, we propose three automatic metrics, **NexusScore**, **NaturalScore**, and **GmeScore**,
which separately quantify subject consistency, naturalness, and text relevance in generated videos. Building on these, we conduct a comprehensive
evaluation of 14 representative S2V models, highlighting their strengths and weaknesses across different types of content.
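For intuition, here is a minimal sketch of how the three per-metric scores might be folded into a single leaderboard number. The `overall_score` helper and its equal weights are hypothetical assumptions for illustration, not the official aggregation; see the paper for the actual protocol.
```python
# Hypothetical sketch (not the official implementation): combine the three
# OpenS2V-Eval metrics into one overall score via a weighted average.
# The equal weights below are placeholder assumptions.

def overall_score(nexus: float, natural: float, gme: float,
                  weights: tuple[float, float, float] = (1 / 3, 1 / 3, 1 / 3)) -> float:
    """Weighted average of subject consistency (NexusScore),
    naturalness (NaturalScore), and text relevance (GmeScore)."""
    w_nexus, w_natural, w_gme = weights
    return w_nexus * nexus + w_natural * natural + w_gme * gme

# Example: a model's (made-up) per-metric scores in [0, 1].
print(overall_score(0.72, 0.65, 0.81))
```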
## 📣 Evaluate Your Own Models
To evaluate your customized model with OpenS2V-Eval as described in the [OpenS2V-Nexus paper](https://huggingface.co/papers/2505.20292), please refer to the [evaluation instructions](https://github.com/PKU-YuanGroup/OpenS2V-Nexus/tree/main/eval).
## ⚙️ Get Videos Generated by Different S2V Models
Videos generated by the evaluated S2V models are available [here](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval/tree/main/Results).
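If you want those videos locally, a minimal sketch using the `huggingface_hub` library is shown below; the `local_dir` destination is an arbitrary choice.
```python
# Sketch: download only the pre-generated videos under Results/ from the
# OpenS2V-Eval benchmark dataset, skipping everything else in the repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="BestWishYsh/OpenS2V-Eval",
    repo_type="dataset",
    allow_patterns=["Results/*"],       # restrict the download to Results/
    local_dir="OpenS2V-Eval-Results",   # hypothetical destination folder
)
```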
## 💡 Description
- **Repository:** [Code](https://github.com/PKU-YuanGroup/OpenS2V-Nexus), [Page](https://pku-yuangroup.github.io/OpenS2V-Nexus/), [Dataset](https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M), [Benchmark](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval)
- **Paper:** [https://huggingface.co/papers/2505.20292](https://huggingface.co/papers/2505.20292)
- **Point of Contact:** [Shenghai Yuan](mailto:shyuan-cs@hotmail.com)
## ✏️ Citation
If you find our paper and code useful in your research, please consider giving us a star ⭐ and citing our work:
```bibtex
@article{yuan2025opens2v,
  title={OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation},
  author={Yuan, Shenghai and He, Xianyi and Deng, Yufan and Ye, Yang and Huang, Jinfa and Lin, Bin and Luo, Jiebo and Yuan, Li},
  journal={arXiv preprint arXiv:2505.20292},
  year={2025}
}
```