YSH (BestWishYsh)

AI & ML interests: none yet

Organizations: ZeroGPU Explorers

BestWishYsh's activity

replied to their post 8 days ago
reacted to AdinaY's post with 🔥 8 days ago
🔥 New benchmark & dataset for Subject-to-Video generation

OpenS2V-Nexus by Peking University

✨ Fine-grained evaluation for subject consistency: BestWishYsh/OpenS2V-Eval
✨ 5M-scale dataset: BestWishYsh/OpenS2V-5M
✨ New metrics: automatic scores for identity, realism, and text match
replied to AdinaY's post 8 days ago
reacted to AdinaY's post with ❤️ 8 days ago
reacted to their post with 🚀🔥 9 days ago
Introducing our new work: OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation 🚀

We tackle the core challenges of Subject-to-Video Generation (S2V) by systematically building the first complete infrastructure for the task, featuring an evaluation benchmark and a million-scale dataset! ✨

🧠 Introducing OpenS2V-Eval, the first fine-grained S2V benchmark, with 180 multi-domain prompts plus real/synthetic test pairs. We propose NexusScore, NaturalScore, and GmeScore to precisely quantify model performance across subject consistency, naturalness, and text alignment ✔

📊 Using this framework, we conduct a comprehensive evaluation of 16 leading S2V models, revealing their strengths and weaknesses in complex scenarios!

🔥 The OpenS2V-5M dataset is now available! A 5.4M-sample 720P HD collection of subject-text-video triplets, built with cross-video association segmentation and multi-view synthesis for diverse subjects and high-quality annotations 🚀

All resources are open-sourced: paper, code, data, and evaluation tools 📄
Let's advance S2V research together! 💡

🔗 Links:
Paper: OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation (2505.20292)
Code: https://github.com/PKU-YuanGroup/OpenS2V-Nexus
Project: https://pku-yuangroup.github.io/OpenS2V-Nexus
Leaderboard: BestWishYsh/OpenS2V-Eval
OpenS2V-Eval: BestWishYsh/OpenS2V-Eval
OpenS2V-5M: BestWishYsh/OpenS2V-5M
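As a rough illustration of how per-sample metric scores like the three above might be combined into a single leaderboard-style number, here is a minimal sketch. The weighted-mean aggregation and the equal default weights are assumptions for illustration only, not the paper's actual scoring rule; `overall_score` is a hypothetical helper, not part of OpenS2V-Eval.

```python
# Hypothetical sketch: combine the three OpenS2V-Eval-style metric scores
# (NexusScore: subject consistency, NaturalScore: naturalness,
# GmeScore: text alignment) into one number via a weighted mean.
# The aggregation rule and weights are illustrative assumptions.

def overall_score(nexus: float, natural: float, gme: float,
                  weights=(1/3, 1/3, 1/3)) -> float:
    """Weighted mean of the three metric scores (each assumed in [0, 1])."""
    scores = (nexus, natural, gme)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Example: a model strong on text alignment, weaker on subject consistency.
print(round(overall_score(0.62, 0.78, 0.85), 4))  # → 0.75
```

Non-uniform weights (e.g. emphasizing subject consistency) can be passed via `weights`; the result is normalized by their sum.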
posted an update 9 days ago
reacted to their post with 👀🔥 2 months ago
posted an update 2 months ago