Model Files
- wan2.2_i2v_high_noise_14B_fp16.gguf: High-noise model in FP16 format (not quantized)
- wan2.2_i2v_low_noise_14B_fp16.gguf: Low-noise model in FP16 format (not quantized)
- wan2.2_t2v_high_noise_14B_fp16.gguf: High-noise model in FP16 format (not quantized)
- wan2.2_t2v_low_noise_14B_fp16.gguf: Low-noise model in FP16 format (not quantized)
Format Details
- Important: These are NOT quantized models but FP16 precision models in GGUF container format
- Base model (I2V): Wan-AI/Wan2.2-I2V-A14B
- Base model (T2V): Wan-AI/Wan2.2-T2V-A14B
- Format: GGUF container with FP16 precision (unquantized)
- Original model size: ~27B parameters (14B active per step)
- File sizes:
  - High-noise: 28.6 GB (SHA256: 3a7d4e...)
  - Low-noise: 28.6 GB (SHA256: 1b4e28...)
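Since the downloads are close to 30 GB, it is worth confirming file integrity before loading. A minimal sketch for computing the SHA256 of a downloaded file (the local filename is an assumption; adjust the path to wherever you saved the model) and comparing it against the values listed above:

```python
# Stream the file in chunks so the ~28 GB model never has to fit in RAM.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

model_path = Path("wan2.2_i2v_high_noise_14B_fp16.gguf")  # assumed local filename
print(f"{model_path.name}: {sha256_of(model_path)}")
# Compare the printed digest with the SHA256 prefix shown in the file list above.
```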
Why FP16 in GGUF?
While GGUF is typically used for quantized models, the ComfyUI-GGUF extension also supports:
- Loading FP16 models stored in the GGUF container format (the stored precision can be verified as sketched below)
- Compatibility with standard ComfyUI workflows
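Because GGUF here is only a container, the stored precision can be checked directly. A minimal sketch, assuming the `gguf` Python package (pip install gguf) and a local copy of one of the files above; it tallies the tensor types recorded in the container, which for these files should be F16 rather than a quantized type (possibly alongside a few F32 tensors such as norms):

```python
from collections import Counter

from gguf import GGUFReader  # reader shipped with the llama.cpp gguf package

reader = GGUFReader("wan2.2_t2v_low_noise_14B_fp16.gguf")  # assumed local filename

# Count how many tensors are stored under each GGML quantization type.
type_counts = Counter(t.tensor_type.name for t in reader.tensors)
print("tensor types in container:", dict(type_counts))
```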