This is a direct GGUF conversion of Wan-AI/Wan2.1-T2V-14B.

All quants are created from the FP32 base file, though I only uploaded the FP16, since the FP32 file itself exceeds the 50GB max file size limit (at 14.3B parameters, FP32 weights alone come to roughly 57GB) and gguf-split loading is not currently supported in ComfyUI-GGUF.

The model files can be used with the ComfyUI-GGUF custom node.
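If you don't have the custom node yet, a minimal install sketch follows (Python standing in for the usual shell commands; the ComfyUI root path is an assumption, and the dependency step is per the node's readme):

```python
# Sketch: clone the ComfyUI-GGUF custom node into an existing ComfyUI install.
import subprocess
from pathlib import Path

comfy_root = Path("ComfyUI")  # assumed install location - adjust to your setup
target = comfy_root / "custom_nodes" / "ComfyUI-GGUF"

if not target.exists():
    subprocess.run(
        ["git", "clone", "https://github.com/city96/ComfyUI-GGUF", str(target)],
        check=True,
    )
# Remember to also install the node's Python dependencies (see its readme).
```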

Place model files in ComfyUI/models/unet - see the GitHub readme for further install instructions.
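Alternatively, the files can be fetched programmatically; here is a minimal sketch using huggingface_hub, where the exact .gguf filename is an assumption (check the repository's file list for the quant you want):

```python
# Sketch: download a quant from this repo straight into ComfyUI's unet folder.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="city96/Wan2.1-T2V-14B-gguf",
    # Assumed filename for illustration - browse the repo for the actual names.
    filename="wan2.1-t2v-14b-Q4_K_M.gguf",
    local_dir="ComfyUI/models/unet",
)
print(f"Model saved to {path}")
```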

The VAE can be downloaded from this repository by Kijai.

Please refer to this chart for a basic overview of quantization types.
