Wan2.2 T2V A14B VACE FP16 GGUF Models (High & Low Noise)

Important Notice: Experimental Model

This GGUF conversion is based on lym00/Wan2.2_T2V_A14B_VACE-test,
which is explicitly labeled as "intended for experimental use only" by the creator.

While the underlying Wan2.2 model is licensed under Apache 2.0 (permitting commercial use),
keep the following points in mind for this specific configuration:

  • Legal Status: Apache 2.0 permits commercial use of the model and its generated content
  • Technical Status: This is an experimental integration of Wan2.2 T2V A14B with VACE scopes
  • Known Issue: Color shifting may occur, as documented in the original model card
  • Stability: Not recommended for production environments without thorough testing

Model Files

  • Wan2.2_T2V_High_Noise_14B_VACE_fp16.gguf – High-noise model (used for initial denoising steps)
  • wan2.2_t2v_low_noise_14B_fp16.gguf – Low-noise model (used for detail refinement)

Requirements

  • ComfyUI with the ComfyUI-GGUF extension installed (installation steps below)
  • Roughly 70 GB of free disk space for the two model files (34.7 GB each)

Installation

  1. Download both GGUF files and place them in ComfyUI/models/unet/ (a scripted download example follows this list)
  2. Install ComfyUI-GGUF extension
  3. Restart ComfyUI
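
The download in step 1 can also be scripted. This is a minimal sketch, assuming the huggingface_hub package is installed; the repository and file names are taken from this card and should be verified, and the target directory must be adjusted to your ComfyUI installation:

```python
# Minimal sketch: fetch both GGUF files into ComfyUI's unet folder.
# Assumes `pip install huggingface_hub`; repository and file names are taken
# from this card and should be verified before use.
from huggingface_hub import hf_hub_download

REPO_ID = "ussoewwin/Wan2.2_T2V_A14B_VACE-test_fp16_GGUF"
TARGET_DIR = "ComfyUI/models/unet"  # adjust to your ComfyUI installation path

for filename in (
    "Wan2.2_T2V_High_Noise_14B_VACE_fp16.gguf",
    "wan2.2_t2v_low_noise_14B_fp16.gguf",
):
    path = hf_hub_download(repo_id=REPO_ID, filename=filename, local_dir=TARGET_DIR)
    print("Downloaded:", path)
```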

Usage

  1. Load the workflow file included in this repository (drag and drop into ComfyUI)
  2. The workflow will automatically use:
    • High-noise model for initial denoising steps (first 2–4 steps)
    • Low-noise model for final detail refinement (remaining steps)
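
For reference, the sketch below writes this split out in plain Python. It is illustrative only: the dictionary keys mirror ComfyUI's KSamplerAdvanced widget names, and the step counts are placeholders rather than the exact values used by the bundled workflow.

```python
# Illustrative only: how a typical high/low-noise workflow divides the sampling
# steps between the two models. Values are placeholders; the bundled workflow
# defines the actual settings.
TOTAL_STEPS = 20
SWITCH_STEP = 4  # hand off to the low-noise model after the first few steps

high_noise_pass = {
    "model": "Wan2.2_T2V_High_Noise_14B_VACE_fp16.gguf",
    "add_noise": "enable",                   # this pass starts from pure noise
    "start_at_step": 0,
    "end_at_step": SWITCH_STEP,
    "return_with_leftover_noise": "enable",  # pass the partly denoised latent on
}
low_noise_pass = {
    "model": "wan2.2_t2v_low_noise_14B_fp16.gguf",
    "add_noise": "disable",                  # continue from the first pass
    "start_at_step": SWITCH_STEP,
    "end_at_step": TOTAL_STEPS,
    "return_with_leftover_noise": "disable",
}

for name, cfg in (("high-noise pass", high_noise_pass), ("low-noise pass", low_noise_pass)):
    print(name, cfg)
```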

Format Details

Important: These are NOT quantized models but FP16 precision models in GGUF container format.

  • Base model: lym00/Wan2.2_T2V_A14B_VACE-test
  • Original model: Combination of Wan2.2 T2V A14B and VACE scopes
  • Format: GGUF container with FP16 precision (unquantized)
  • Model size: ~27B parameters (14B active per step)
  • File sizes:
    • High: 34.7 GB
    • Low: 34.7 GB
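
As a quick back-of-the-envelope check on the figures above (a sketch only; it assumes the 34.7 GB figure is in decimal gigabytes):

```python
# Back-of-the-envelope: FP16 stores 2 bytes per parameter, so the reported file
# size tells you how many values each GGUF file holds. The stored count is
# larger than the "14B active" figure because extra weights (presumably the
# added VACE scope and any shared components) live in the same file.
file_size_gb = 34.7           # reported size of each model file (decimal GB)
bytes_per_param_fp16 = 2

stored_params_billion = file_size_gb / bytes_per_param_fp16  # GB / (bytes per param) = billions of params
print(f"~{stored_params_billion:.2f}B FP16 values stored per file")  # ~17.35B
```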

Why FP16 in GGUF?

While GGUF is typically used for quantized models, ComfyUI-GGUF also supports:

  • Loading unquantized FP16 weights from a GGUF container
  • Full compatibility with existing ComfyUI workflows

The trade-off is size: these FP16 files are roughly twice as large as an 8-bit quantization, in exchange for maximum quality.
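
If you want to verify the unquantized claim yourself, here is a minimal sketch using the gguf Python package from the llama.cpp project (`pip install gguf`); the file name is assumed to be the high-noise file listed in this card:

```python
# Minimal sketch: list the GGML storage types used in a GGUF file. An
# unquantized FP16 export should be dominated by F16 tensors (plus a few
# F32 tensors for norms/biases).
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("Wan2.2_T2V_High_Noise_14B_VACE_fp16.gguf")
type_counts = Counter(t.tensor_type.name for t in reader.tensors)
print(type_counts)  # expected: mostly 'F16'
```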

MoE Architecture Explained

Wan2.2 uses a Mixture-of-Experts (MoE) architecture:

  • High-noise expert: Used for early denoising, focuses on layout and motion
  • Low-noise expert: Used later for refining textures and details
  • Transition point determined by signal-to-noise ratio (SNR)
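
As a toy illustration of that hand-off (the 0.875 boundary below is a placeholder, not the value Wan2.2 actually uses): while the remaining noise level is above the threshold the high-noise expert runs, and once it drops below, the low-noise expert takes over.

```python
# Toy sketch of the MoE hand-off: choose an expert per denoising step from the
# current noise level. The 0.875 boundary is a placeholder, not Wan2.2's
# actual transition point.
def pick_expert(noise_level: float, boundary: float = 0.875) -> str:
    return "high_noise_expert" if noise_level >= boundary else "low_noise_expert"

# Example with a simple decreasing noise schedule over 8 steps:
schedule = [1.00, 0.95, 0.90, 0.85, 0.70, 0.50, 0.30, 0.10]
print([pick_expert(s) for s in schedule])
# -> the first three steps use the high-noise expert, the rest the low-noise expert
```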

VACE Integration

This model incorporates VACE (All-in-one Video Creation and Editing), the controllable video creation and editing framework released for Wan2.1:

  • Adds reference- and control-conditioned generation on top of the base text-to-video model
  • Supports editing-style tasks such as video-to-video and masked (inpainting/outpainting) generation
  • Makes generation considerably more controllable than plain text-to-video

Known Limitations & Commercial Use Guidance

  1. Color Shifting Issue:

    • The same issue as in the original lym00 model
    • The VACE team is reportedly working on a fix (per the Banodoco Discord)
    • Avoid this model for applications that require color accuracy
  2. Experimental Status:

    • Some features may not work as expected
    • Output quality can vary
  3. Commercial Use Recommendations:

    • Allowed under Apache 2.0
    • Test thoroughly before commercial deployment
    • Consider the official Wan-AI/Wan2.2-T2V-A14B for production
  4. Legal Disclaimer:

    • You are fully responsible for compliance with laws and ethical use

Original Model Information

  • Wan2.2 T2V A14B – Text-to-Video MoE model supporting 480p & 720p generation
  • VACE – All-in-one Video Creation and Editing framework from Wan2.1

Features:

  • Effective MoE separation of denoising steps
  • Cinematic-level control over visuals
  • High-definition motion generation at 720p@24fps on consumer GPUs

License Agreement

Same Apache 2.0 terms as the original model.
Commercial use is allowed, but stability issues mean testing is strongly advised.

Acknowledgements

  • lym00 for the original Wan2.2_T2V_A14B_VACE-test merge that this conversion is based on
  • The ComfyUI-GGUF project, which makes loading FP16 GGUF files in ComfyUI possible

Contact

To report problems or ask questions, open an issue on GitHub.
