Update README.md

Changed files:
- README.md +81 -3
- pipeline.jpg +3 -0
- quantitative.png +3 -0
- teaser.jpg +3 -0

README.md (CHANGED)
@@ -1,3 +1,81 @@
---
license: mit
---

# *Omni-Effects*: Unified and Spatially-Controllable Visual Effects Generation

[Paper (arXiv:2508.07981)](https://arxiv.org/abs/2508.07981)
[Project Page](https://amap-ml.github.io/Omni-Effects.github.io/)
[Omni-VFX Dataset](https://huggingface.co/datasets/GD-ML/Omni-VFX)
[Omni-Effects Model](https://huggingface.co/GD-ML/Omni-Effects)

# 🔥 Updates

- [2025/08] We release CogVideoX-1.5 fine-tuned on our Omni-VFX dataset!
- [2025/08] We release the controllable single-VFX/multi-VFX version of Omni-Effects!

# 📣 Overview

<p align="center">
    <img src="teaser.jpg" width="100%"/>
</p>

Visual effects (VFX) are essential visual enhancements fundamental to modern cinematic production. Although video generation models offer cost-efficient solutions for VFX production, current methods are constrained by per-effect LoRA training, which limits generation to single effects. This fundamental limitation impedes applications that require spatially controllable composite effects, i.e., the concurrent generation of multiple effects at designated locations. Integrating diverse effects into a unified framework, however, faces major challenges: interference arising from effect variations and spatial uncontrollability during multi-VFX joint training. To tackle these challenges, we propose *Omni-Effects*, the first unified framework capable of generating prompt-guided effects and spatially controllable composite effects. The core of our framework comprises two key innovations: (1) **LoRA-based Mixture of Experts (LoRA-MoE)**, which employs a group of expert LoRAs to integrate diverse effects within a unified model while effectively mitigating cross-task interference; and (2) **Spatial-Aware Prompt (SAP)**, which incorporates spatial mask information into the text tokens, enabling precise spatial control. Furthermore, we introduce an Independent-Information Flow (IIF) module within the SAP that isolates the control signals of individual effects to prevent unwanted blending. To facilitate this research, we construct *Omni-VFX*, a comprehensive VFX dataset, via a novel data collection pipeline combining image editing and First-Last Frame-to-Video (FLF2V) synthesis, and introduce a dedicated VFX evaluation framework for validating model performance. Extensive experiments demonstrate that *Omni-Effects* achieves precise spatial control and diverse effect generation, enabling users to specify both the category and location of desired effects.

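To make the LoRA-MoE idea concrete, the sketch below shows one way an expert-routed LoRA adapter could sit on top of a frozen linear layer. It is an illustration only, not the released implementation; the class name and the `num_experts` and `rank` values are invented for the example.

```python
import torch
import torch.nn as nn

class LoRAMoELinear(nn.Module):
    """Illustrative LoRA-MoE layer: a frozen base linear layer plus a group of
    expert LoRAs whose low-rank updates are mixed by a learned router."""

    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay frozen
        d_in, d_out = base.in_features, base.out_features
        self.down = nn.ModuleList(nn.Linear(d_in, rank, bias=False) for _ in range(num_experts))
        self.up = nn.ModuleList(nn.Linear(rank, d_out, bias=False) for _ in range(num_experts))
        self.router = nn.Linear(d_in, num_experts)  # per-token gate over the experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.softmax(self.router(x), dim=-1)          # (..., num_experts)
        experts = torch.stack([up(down(x)) for down, up in zip(self.down, self.up)], dim=-1)
        delta = (experts * gates.unsqueeze(-2)).sum(dim=-1)    # gate-weighted sum of LoRA updates
        return self.base(x) + delta

# Example: wrap a projection layer of a transformer block.
layer = LoRAMoELinear(nn.Linear(1024, 1024), num_experts=4, rank=16)
out = layer(torch.randn(2, 77, 1024))
```
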
# 🔨 Installation

```shell
git clone https://github.com/AMAP-ML/Omni-Effects.git
cd Omni-Effects

conda create -n OmniEffects python=3.10.14
conda activate OmniEffects
pip install -r requirements.txt
```

Download the checkpoints from Hugging Face and put them in `checkpoints/`.
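If you prefer to fetch them programmatically, the repositories linked above can be downloaded with the `huggingface_hub` package. This is only a convenience sketch; the `local_dir` layout expected by the scripts is an assumption, so adjust the paths as needed.

```python
from huggingface_hub import snapshot_download

# Model weights (https://huggingface.co/GD-ML/Omni-Effects)
snapshot_download(repo_id="GD-ML/Omni-Effects", local_dir="checkpoints")

# Optional: the Omni-VFX dataset (https://huggingface.co/datasets/GD-ML/Omni-VFX)
snapshot_download(repo_id="GD-ML/Omni-VFX", repo_type="dataset", local_dir="data/Omni-VFX")
```
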
# 🔧 Usage

## Omni-VFX dataset and prompt-guided VFX

We have released the most comprehensive VFX dataset currently available on Hugging Face. It draws on three main sources: assets from the [Open-VFX dataset](https://huggingface.co/datasets/sophiaa/Open-VFX), distilled effects provided by [Remade-AI](https://huggingface.co/Remade-AI), and VFX videos created with FLF2V. Due to copyright restrictions, a small portion of the videos cannot be publicly shared. Additionally, we provide a CogVideoX1.5 model fine-tuned on our Omni-VFX dataset, which enables prompt-guided VFX video generation. Example prompts are listed in `VFX-prompts.txt`.

```shell
sh scripts/prompt_guided_VFX.sh # modify the prompt and input image
```
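If you would rather call the model from Python, and assuming the released checkpoint is compatible with the standard diffusers `CogVideoXImageToVideoPipeline` (an assumption; the checkpoint path, prompt, and sampling parameters below are illustrative), generation might look roughly like this. The shell script above remains the supported entry point.

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Hypothetical path inside the downloaded checkpoints; adjust to the actual layout.
pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "checkpoints/prompt_guided_VFX", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trade speed for lower VRAM usage

image = load_image("input.jpg")  # the image the effect is applied to
prompt = "The object explodes into glowing fragments."  # see VFX-prompts.txt for reference prompts

frames = pipe(
    prompt=prompt,
    image=image,
    num_frames=81,
    num_inference_steps=50,
    guidance_scale=6.0,
    generator=torch.Generator().manual_seed(42),
).frames[0]

export_to_video(frames, "output.mp4", fps=16)
```
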
## SAP-guided spatially controllable VFX

The current SAP-guided spatially controllable VFX supports the effects **"Melt it", "Levitate it", "Explode it", "Turn it into anime style"** and **"Change the setting to a winter scene"**.

### Single-VFX

```shell
sh scripts/inference_omnieffects_singleVFX.sh
```

### Multi-VFX

```shell
sh scripts/inference_omnieffects_multiVFX.sh
```
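Spatial control is specified with per-effect region masks (see the paper for details). The exact mask format consumed by the inference scripts is not documented here, but if you need to author a binary region mask yourself, a minimal sketch with Pillow might look like this (file name and resolution are assumptions):

```python
from PIL import Image, ImageDraw

# Build a binary mask: white where the effect should appear, black elsewhere.
width, height = 1360, 768          # illustrative resolution; match your input image
mask = Image.new("L", (width, height), 0)
draw = ImageDraw.Draw(mask)
draw.rectangle([100, 200, 600, 700], fill=255)  # region for one effect

mask.save("effect_mask.png")       # hypothetical file name; point the script at your own layout
```
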
# 📊 Quantitative Results

*Omni-Effects* achieves precise spatial control in visual effects generation.

<p align="center">
    <img src="quantitative.png" width="100%"/>
</p>

# Acknowledgement

We would like to thank the authors of [CogVideoX](https://github.com/zai-org/CogVideo), [EasyControl](https://github.com/Xiaojiu-z/EasyControl) and [VFXCreator](https://huggingface.co/datasets/sophiaa/Open-VFX) for their outstanding work.

# Citation

```bibtex
@misc{mao2025omnieffects,
      title={Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation},
      author={Fangyuan Mao and Aiming Hao and Jintao Chen and Dongxia Liu and Xiaokun Feng and Jiashu Zhu and Meiqi Wu and Chubin Chen and Jiahong Wu and Xiangxiang Chu},
      year={2025},
      eprint={2508.07981},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
pipeline.jpg
ADDED (Git LFS)

quantitative.png
ADDED (Git LFS)

teaser.jpg
ADDED (Git LFS)