This GGUF file is a direct conversion of Wan-AI/Wan2.2-I2V-A14B.

Since this is a quantized model, all original licensing terms and usage restrictions remain in effect.

Usage

The model can be used with the ComfyUI-GGUF custom node by city96.

Place the model files in `ComfyUI/models/unet`; see the GitHub readme for further installation instructions.
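As a minimal sketch, one way to fetch a single quant directly into that folder is with `huggingface_hub`. The filename below is a placeholder; replace it with one of the `.gguf` files actually listed in this repository.

```python
# Sketch: download one quantized GGUF file into ComfyUI's unet folder.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="QuantStack/Wan2.2-I2V-A14B-GGUF",
    filename="Wan2.2-I2V-A14B-Q4_K_M.gguf",  # placeholder name; pick an available quant
    local_dir="ComfyUI/models/unet",
)
```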
