This is a GGUF conversion of a merge of lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill with the VACE scopes from Wan-AI/Wan2.1-VACE-14B.
The VACE scopes were extracted and injected into the target model using scripts provided by wsbagnsv1.
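For illustration, a minimal sketch of what such a scope merge might look like with the `safetensors` Python API is shown below. The `vace` key prefix and the file names are assumptions for the sketch; the actual scripts by wsbagnsv1 may work differently.

```python
# Hypothetical sketch of a VACE scope merge -- not wsbagnsv1's actual script.
# Assumes VACE-specific tensors can be identified by a "vace" key prefix.
from safetensors.torch import load_file, save_file

base = load_file("Wan2.1-T2V-14B-StepDistill-CfgDistill.safetensors")  # target model
vace = load_file("Wan2.1-VACE-14B.safetensors")                        # VACE donor

# Copy every tensor whose key marks it as part of a VACE scope.
merged = dict(base)
merged.update({k: v for k, v in vace.items() if k.startswith("vace")})

save_file(merged, "Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-F16.safetensors")
```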
All quantized versions were created from the FP16 model using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.
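As a rough outline of that pipeline: the FP16 model is first wrapped in a GGUF container, then requantized to each target type. The driver below is only illustrative; file names, quant targets, and the location of the patched llama.cpp quantize binary are assumptions, so refer to the ComfyUI-GGUF repository's documentation for the authoritative steps.

```python
# Illustrative driver for the ComfyUI-GGUF conversion pipeline; paths,
# file names, and tool locations below are assumptions, not verified commands.
import subprocess

# Step 1: convert the FP16 safetensors model to a GGUF container
# (tools/convert.py from the ComfyUI-GGUF repository).
subprocess.run(
    ["python", "tools/convert.py",
     "--src", "Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-F16.safetensors"],
    check=True,
)

# Step 2: requantize the F16 GGUF to each released quant type with the
# patched llama.cpp quantize tool described in the ComfyUI-GGUF docs.
for qtype in ["Q4_K_M", "Q5_K_M", "Q8_0"]:  # example targets only
    subprocess.run(
        ["./llama-quantize",
         "Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-F16.gguf",
         f"Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-{qtype}.gguf",
         qtype],
        check=True,
    )
```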
## Usage
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| --- | --- | --- | --- |
| Main Model | Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-GGUF | ComfyUI/models/unet | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | ComfyUI/models/vae | Safetensors |
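If you prefer to fetch the files programmatically, a download along the lines of the sketch below should work with `huggingface_hub`. The repo ID and file name are placeholders; substitute the actual quant file you want from this repository.

```python
# Sketch: download a quant and place it where ComfyUI-GGUF expects it.
# The repo_id and filename are placeholders -- pick a .gguf file from this repo.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="<this-repo-id>",                                              # placeholder
    filename="Wan2.1_T2V_14B_LightX2V_Step_Cfg_Distill_VACE-Q4_K_M.gguf",  # placeholder
    local_dir="ComfyUI/models/unet",
)
```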
## Notes
All original licenses and restrictions from the base models still apply.
## Reference

- For an overview of the available quantization types, see the GGUF quantization types documentation.