MIVE: New Design and Benchmark for Multi-Instance Video Editing
Abstract
AI-based video editing lets users edit videos through simple text prompts, significantly simplifying the editing process. However, recent zero-shot video editing techniques primarily focus on global or single-object edits, which can cause unintended changes in other parts of the video. When multiple objects require localized edits, existing methods face challenges such as unfaithful editing, editing leakage, and a lack of suitable evaluation datasets and metrics. To overcome these limitations, we propose a zero-shot Multi-Instance Video Editing framework called MIVE. MIVE is a general-purpose mask-based framework, not dedicated to specific objects (e.g., people). MIVE introduces two key modules: (i) Disentangled Multi-instance Sampling (DMS) to prevent editing leakage and (ii) Instance-centric Probability Redistribution (IPR) to ensure precise localization and faithful editing. Additionally, we present our new MIVE Dataset featuring diverse video scenarios and introduce the Cross-Instance Accuracy (CIA) Score to evaluate editing leakage in multi-instance video editing tasks. Our extensive qualitative, quantitative, and user study evaluations demonstrate that MIVE significantly outperforms recent state-of-the-art methods in terms of editing faithfulness, accuracy, and leakage prevention, setting a new benchmark for multi-instance video editing. The project page is available at https://kaist-viclab.github.io/mive-site/.
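As a rough illustration of what a cross-instance accuracy score can measure (a hedged sketch, not the paper's exact formulation): for each edited instance, check whether its edited region matches its own target caption better than any other instance's caption; leakage shows up as an instance matching a neighbor's caption instead. In the minimal Python sketch below, the similarity matrix `sim` and the toy numbers are hypothetical stand-ins for CLIP-style region-caption scores.

```python
import numpy as np

def cross_instance_accuracy(sim: np.ndarray) -> float:
    """Fraction of instances whose edited region matches its own target
    caption better than any other instance's caption.

    sim[i, j]: similarity (e.g., a CLIP score) between the edited region
    of instance i and the target caption of instance j, shape (N, N).
    """
    best_caption = sim.argmax(axis=1)  # best-matching caption per instance
    return float((best_caption == np.arange(sim.shape[0])).mean())

# Toy example with 3 instances: instance 1's edit "leaks" toward
# instance 0's caption, so only 2 of 3 instances score correctly.
sim = np.array([
    [0.31, 0.22, 0.18],
    [0.29, 0.25, 0.20],
    [0.15, 0.19, 0.33],
])
print(cross_instance_accuracy(sim))  # 0.666...
```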
Community
🔥 MIVE: New Design and Benchmark for Multi-Instance Video Editing 🔥
Highlights:
✅ Novel zero-shot multi-instance video editing
✅ Novel Disentangled Multi-instance Sampling (DMS) to prevent editing leakage (sketched after this comment)
✅ Novel Instance-centric Probability Redistribution (IPR) to ensure precise localization and faithful editing
✅ A new MIVE Dataset for multi-instance video editing tasks
✅ State-of-the-art performance in editing faithfulness, accuracy, and leakage prevention
🌐 Project Page: https://kaist-viclab.github.io/mive-site/
📄 Paper: https://arxiv.org/abs/2412.12877
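As a rough intuition for the DMS idea highlighted above (a minimal sketch under assumed interfaces, not MIVE's actual sampling procedure): each instance could be denoised with its own prompt and the results composed through the instance masks, so that one instance's edit cannot bleed into another's region. Here `denoise_step` is a hypothetical latent-diffusion denoiser, and the fusion rule is illustrative.

```python
import numpy as np

def dms_step(latent, masks, prompts, global_prompt, t, denoise_step):
    """One disentangled denoising step (illustrative only).

    latent:        current noisy latent, shape (C, H, W)
    masks:         per-instance masks, each shape (1, H, W) with values in [0, 1]
    prompts:       per-instance target captions
    global_prompt: caption for the background / full frame
    denoise_step:  hypothetical denoiser: (latent, prompt, t) -> latent
    """
    fused = denoise_step(latent, global_prompt, t)   # background pass
    for mask, prompt in zip(masks, prompts):
        inst = denoise_step(latent, prompt, t)       # isolated per-instance pass
        fused = mask * inst + (1.0 - mask) * fused   # compose inside the mask
    return fused

# Dummy usage with a trivial stand-in denoiser and two non-overlapping masks.
dummy = lambda latent, prompt, t: latent * 0.99
latent = np.random.randn(4, 64, 64)
masks = [np.zeros((1, 64, 64)), np.zeros((1, 64, 64))]
masks[0][:, :32], masks[1][:, 32:] = 1.0, 1.0
out = dms_step(latent, masks, ["a red car", "a golden dog"],
               "a street scene", t=10, denoise_step=dummy)
```

A real multi-instance sampler would also need to handle overlapping masks and keep the composed latent temporally consistent across frames; the sketch only shows the per-step masked fusion.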
This is an automated message from Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MoViE: Mobile Diffusion for Video Editing (2024)
- VIVID-10M: A Dataset and Baseline for Versatile and Interactive Video Local Editing (2024)
- Re-Attentional Controllable Video Diffusion Editing (2024)
- Motion Control for Enhanced Complex Action Video Generation (2024)
- DreamColour: Controllable Video Colour Editing without Training (2024)
- Pathways on the Image Manifold: Image Editing via Video Generation (2024)
- SAVEn-Vid: Synergistic Audio-Visual Integration for Enhanced Understanding in Long Video Context (2024)