This is a GGUF conversion of lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill augmented with the VACE scopes from Wan-AI/Wan2.1-VACE-14B.

The process involved extracting the VACE scopes and injecting them into the target model, using scripts provided by wsbagnsv1.
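
The exact merge scripts are wsbagnsv1's; the following is only a minimal sketch of the idea, assuming both checkpoints are single safetensors files and that the VACE tensors can be identified by a "vace" key prefix (all file names here are illustrative):

```python
# Minimal sketch of the scope injection, NOT the original script:
# copy every VACE-scoped tensor from the VACE checkpoint into the
# distilled base checkpoint and save the merged state dict.
from safetensors.torch import load_file, save_file

base = load_file("Wan2.1-T2V-14B-StepDistill-CfgDistill.safetensors")
vace = load_file("Wan2.1-VACE-14B.safetensors")

merged = dict(base)
# Assumption: VACE layers are namespaced under a "vace" key prefix.
merged.update({k: v for k, v in vace.items() if k.startswith("vace")})

save_file(merged, "Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE.safetensors")
```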

All quantized versions were created from the FP16 model using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.
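
For reference, the typical flow with those scripts looks roughly like the sketch below. It assumes the ComfyUI-GGUF repository's tools/convert.py is available along with a llama-quantize binary built per that repo's instructions; file names and the quant type are illustrative:

```python
# Rough sketch of the quantization pipeline (file names illustrative).
import subprocess

# 1. Convert the FP16 safetensors model to a GGUF file using
#    city96's conversion script from the ComfyUI-GGUF repo.
subprocess.run(
    ["python", "tools/convert.py", "--src", "merged_fp16.safetensors"],
    check=True,
)

# 2. Produce a quantized variant with a llama-quantize binary
#    (built with the ComfyUI-GGUF patch for image/video models).
subprocess.run(
    ["./llama-quantize", "merged_fp16-F16.gguf", "merged-Q5_K_M.gguf", "Q5_K_M"],
    check=True,
)
```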

Usage

The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
|------|------|----------|----------|
| Main Model | Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-GGUF | ComfyUI/models/unet | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | ComfyUI/models/vae | Safetensors |
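
To sanity-check the placement, a small script like the one below confirms each file is where the ComfyUI-GGUF loaders expect it. The file names shown are hypothetical; substitute the quant variants you actually downloaded:

```python
# Verify that each model file sits in the ComfyUI folder listed above.
from pathlib import Path

COMFYUI = Path("ComfyUI")  # adjust to your install location
expected = {
    "unet": "Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-Q4_K_M.gguf",
    "text_encoders": "umt5-xxl-encoder-Q8_0.gguf",
    "vae": "Wan2_1_VAE_bf16.safetensors",
}
for folder, filename in expected.items():
    path = COMFYUI / "models" / folder / filename
    print(("OK      " if path.exists() else "MISSING ") + str(path))
```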

ComfyUI example workflow

Notes

All original licenses and restrictions from the base models still apply.
