This is a GGUF conversion of Wan14Bi2vFusioniX_fp16.safetensors by @vrgamedevgirl84.

All quantized versions were created from the base I2V FP16 model using the conversion scripts provided by city96, available at the ComfyUI-GGUF GitHub repository.
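As a rough sketch, a conversion like this is typically a two-step flow with city96's tools: convert the safetensors checkpoint to an FP16 GGUF, then quantize it down. The script path, output filenames, and quantization type below are illustrative assumptions, not details taken from this repo:

```shell
# Illustrative conversion flow; assumes the ComfyUI-GGUF repo is cloned locally
# and a llama-quantize binary is built (city96's readme describes a patched
# llama.cpp build for this). All filenames here are placeholders.
SRC=Wan14Bi2vFusioniX_fp16.safetensors

if [ -f "$SRC" ]; then
  # Step 1: safetensors -> FP16 GGUF.
  python ComfyUI-GGUF/tools/convert.py --src "$SRC"
  # Step 2: quantize the FP16 GGUF to a smaller type, e.g. Q4_K_M.
  ./llama-quantize Wan14Bi2vFusioniX-F16.gguf Wan14Bi2vFusioniX-Q4_K_M.gguf Q4_K_M
else
  # Nothing to convert in this environment.
  echo "source checkpoint not found: $SRC"
fi
```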

Usage

The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| --- | --- | --- | --- |
| Main Model | Wan2.1_I2V_14B_FusionX-GGUF | ComfyUI/models/unet | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | ComfyUI/models/vae | Safetensors |
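The folder layout above can be set up from the command line. The repository ID is this repo's, but the exact .gguf filename is an assumption (check the repo's file list and pick the quantization that fits your VRAM); the download step only runs when explicitly enabled:

```shell
# Create the ComfyUI model folders (path relative to your ComfyUI install).
COMFYUI=${COMFYUI:-./ComfyUI}
mkdir -p "$COMFYUI/models/unet" "$COMFYUI/models/text_encoders" "$COMFYUI/models/vae"

# Optionally download one quantized main model into models/unet.
# The filename below is a placeholder; set DO_DOWNLOAD=1 to actually fetch it.
if [ "${DO_DOWNLOAD:-0}" = 1 ] && command -v huggingface-cli >/dev/null 2>&1; then
  huggingface-cli download QuantStack/Wan2.1_I2V_14B_FusionX-GGUF \
    Wan2.1_I2V_14B_FusionX-Q4_K_M.gguf --local-dir "$COMFYUI/models/unet"
else
  echo "skipping download; fetch the files manually from the repo page if needed"
fi
```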

ComfyUI example workflow

Notes

All original licenses and restrictions from the base models still apply.

Model details

Model size: 16.4B params
Architecture: wan
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit
