This is a direct GGUF conversion of https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid

The model files can be used with the ComfyUI-GGUF custom node.

Place the model files in ComfyUI/models/unet - see the GitHub readme for further install instructions.
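If you prefer a scripted download, here is a minimal sketch using the huggingface_hub Python library. The quant filename is a placeholder (check this repository's file list for the actual names), and the ComfyUI path should be adjusted to your local install.

```python
# Minimal sketch: fetch one GGUF quant from this repo and place it in ComfyUI/models/unet.
# The quant filename below is an assumption; check the repository's file list for real names.
from pathlib import Path

from huggingface_hub import hf_hub_download

comfyui_root = Path("ComfyUI")              # adjust to your ComfyUI install location
unet_dir = comfyui_root / "models" / "unet"
unet_dir.mkdir(parents=True, exist_ok=True)

hf_hub_download(
    repo_id="Njbx/Wan2.1-T2V-14B-CausVid-GGUF",
    filename="Wan2.1-T2V-14B-CausVid-Q4_K_M.gguf",  # hypothetical filename
    local_dir=unet_dir,
)
```

Once the file is in place, it should appear in the GGUF UNet loader node provided by ComfyUI-GGUF after restarting ComfyUI.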

The VAE can be downloaded from this repository by Kijai.

Please refer to this chart for a basic overview of quantization types.


License: apache-2.0

Model size: 14.3B parameters (architecture: wan)
Quantized files are available at 3-bit, 4-bit, and 5-bit precision.
