---
license: apache-2.0
---

> [!IMPORTANT]
> ⚠️ **Notice**
> This project is intended for **experimental use only**.

This is an addon experiment using the **VACE blocks** extracted from **[Wan2.1 VACE T2V 14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B)** and injected into **[Wan2.2 T2V A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B)**. A rough sketch of the merge idea is included at the end of this card.

Tested with **2-step High Noise + 2-step Low Noise** sampling and the [LightX2V LoRA](https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v); it works fine in ComfyUI.

There is news that the VACE team may release a **fix for the [color shifting](https://github.com/ali-vilab/VACE/issues/44) issue**. Further testing will wait for that official fix.

---

## References

🔗 [Wan2.2 MoE](https://www.youtube.com/live/XaW_ZXC0Jv8?t=995)
> - **Wan2.2** separates expert models by timestep:
>   - The **High-Noise expert** focuses on generating the overall layout and motion.
>   - The **Low-Noise expert** refines textures and details.
> - The **A14B model** includes both High-Noise and Low-Noise experts, which are activated at different denoising stages.

🔗 [Wan2.2 Workflow Examples](https://docs.comfy.org/tutorials/video/wan/wan2_2#wan2-2-14b-t2v-text-to-video-workflow-example)

---
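
## Illustrative sketches

The snippet below is only a rough sketch of the block-transplant idea described in the notice above, not the actual script used for this release. The file paths and the assumption that the VACE-specific tensors can be selected by a `vace` substring in their keys are illustrative guesses.

```python
# Hedged sketch: pull VACE-related tensors out of the Wan2.1 VACE checkpoint
# and merge them into a Wan2.2 expert checkpoint. Paths and the "vace" key
# filter are assumptions for illustration, not the release tooling.
from safetensors.torch import load_file, save_file

VACE_SRC = "Wan2.1-VACE-14B/diffusion_model.safetensors"    # hypothetical path
WAN22_DST = "Wan2.2-T2V-A14B/high_noise_model.safetensors"  # hypothetical path
OUT_PATH = "Wan2.2-T2V-A14B-VACE/high_noise_model.safetensors"

def extract_vace_tensors(path: str) -> dict:
    """Keep only tensors whose key looks VACE-specific (assumed naming)."""
    state = load_file(path)
    return {k: v for k, v in state.items() if "vace" in k.lower()}

def inject(dst_path: str, vace_tensors: dict, out_path: str) -> None:
    """Add the extracted VACE tensors to the target state dict and save it."""
    merged = load_file(dst_path)
    merged.update(vace_tensors)  # adds the VACE keys; colliding keys would be overwritten
    save_file(merged, out_path)

if __name__ == "__main__":
    inject(WAN22_DST, extract_vace_tensors(VACE_SRC), OUT_PATH)
```

For the two-expert scheduling described under the Wan2.2 MoE reference, the idea boils down to routing the noisier steps to the High-Noise expert and the remaining steps to the Low-Noise expert. A toy illustration follows (the step counts match the 2+2 test setup above; the function and names are made up, not Wan2.2's scheduler code):

```python
# Toy routing sketch only, not the actual Wan2.2 sampler.
def pick_expert(step: int, high_noise_steps: int = 2) -> str:
    """Earlier (noisier) steps use the High-Noise expert, later steps the Low-Noise expert."""
    return "high_noise" if step < high_noise_steps else "low_noise"

print([pick_expert(s) for s in range(4)])
# ['high_noise', 'high_noise', 'low_noise', 'low_noise']
```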