---
base_model:
- QuantStack/Wan2.1_T2V_14B_FusionX_VACE
base_model_relation: quantized
library_name: gguf
quantized_by: lym00
tags:
- text-to-video
- image-to-video
- video-to-video
- quantized
language:
- en
license: apache-2.0
---
This is a GGUF conversion of [QuantStack/Wan2.1_T2V_14B_FusionX_VACE](https://huggingface.co/QuantStack/Wan2.1_T2V_14B_FusionX_VACE).

All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) GitHub repository.
## Usage
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
| --- | --- | --- | --- |
| Main Model | Wan2.1_T2V_14B_FusionX_VACE-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | Safetensors |
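The folder layout above can be sanity-checked with a small script before launching ComfyUI. This is a minimal sketch, not part of the official workflow; the exact GGUF filenames below are assumptions and depend on which quantization you download from this repo.

```python
from pathlib import Path

# Expected locations from the table above, relative to the ComfyUI root.
# The filenames are illustrative examples only -- substitute the exact
# quantization variant you downloaded (e.g. Q4_K_M, Q8_0, ...).
EXPECTED = {
    "models/unet": "Wan2.1_T2V_14B_FusionX_VACE-Q4_K_M.gguf",
    "models/text_encoders": "umt5-xxl-encoder-Q8_0.gguf",
    "models/vae": "Wan2_1_VAE_bf16.safetensors",
}

def missing_files(comfy_root: str) -> list[str]:
    """Return the expected model files that are not yet present under comfy_root."""
    root = Path(comfy_root)
    return [
        str(Path(folder) / name)
        for folder, name in EXPECTED.items()
        if not (root / folder / name).is_file()
    ]
```

Running `missing_files("/path/to/ComfyUI")` lists anything still to be placed; an empty list means the layout matches the table.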
## Notes
All original licenses and restrictions from the base models still apply.
## Reference
- For an overview of quantization types, please see the [GGUF quantization types](https://huggingface.co/docs/hub/gguf#quantization-types).