---
license: apache-2.0
language:
- en
base_model:
- Comfy-Org/Wan_2.1_ComfyUI_repackaged
pipeline_tag: text-to-video
tags:
- gguf-node
widget:
- text: >-
    a pig moving quickly in a beautiful winter scenery nature trees sunset
    tracking camera
  parameters:
    negative_prompt: >-
      blurry ugly bad
  output:
    url: samples/ComfyUI_00007_.webp
- text: >-
    a pig moving quickly in a beautiful winter scenery nature trees sunset
    tracking camera
  parameters:
    negative_prompt: >-
      blurry ugly bad
  output:
    url: samples/ComfyUI_00009_.webp
- text: >-
    a pig moving quickly in a beautiful winter scenery nature trees sunset
    tracking camera
  parameters:
    negative_prompt: >-
      blurry ugly bad
  output:
    url: samples/ComfyUI_00003_.webp
- text: >-
    a pig moving quickly in a beautiful winter scenery nature trees sunset
    tracking camera
  parameters:
    negative_prompt: >-
      blurry ugly bad
  output:
    url: samples/ComfyUI_00004_.webp
- text: >-
    a pig moving quickly in a beautiful winter scenery nature trees sunset
    tracking camera
  parameters:
    negative_prompt: >-
      blurry ugly bad
  output:
    url: samples/ComfyUI_00005_.webp
- text: >-
    a pig moving quickly in a beautiful winter scenery nature trees sunset
    tracking camera
  parameters:
    negative_prompt: >-
      blurry ugly bad
  output:
    url: samples/ComfyUI_00006_.webp
- text: >-
    a fox moving quickly in a beautiful winter scenery nature trees sunset
    tracking camera
  parameters:
    negative_prompt: >-
      blurry ugly bad
  output:
    url: samples/ComfyUI_00002_.webp
- text: >-
    a cute anime girl with massive fennec ears and a big fluffy tail wearing a
    maid outfit turning around
  parameters:
    negative_prompt: >-
      blurry ugly bad
  output:
    url: samples/ComfyUI_00008_.webp
- text: >-
    glass flower blossom
  output:
    url: samples/ComfyUI_00010_.webp
---

# **gguf quantized version of wan video**

- drag **gguf** to > `./ComfyUI/models/diffusion_models`
- drag **t5xxl-um** to > `./ComfyUI/models/text_encoders`
- drag **vae** to > `./ComfyUI/models/vae`
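
prefer scripting over dragging files by hand? a minimal `huggingface_hub` sketch is below; the diffusion-model repo id and filename are placeholders (pick the actual quant you want from the file list), while the encoder and vae entries reuse the files linked in the workflow notes below

```python
# minimal sketch (assumes huggingface_hub is installed and the script runs from
# the folder that contains ./ComfyUI); replace the placeholder entry with the
# actual gguf quant you want to use
from huggingface_hub import hf_hub_download

files = [
    # (repo_id, filename, target subfolder under ComfyUI/models)
    ("calcuis/wan-gguf", "wan2.1-t2v-q4_0.gguf", "diffusion_models"),  # placeholder repo id and filename
    ("calcuis/wan-1.3b-gguf", "umt5-xxl-encoder-q4_k_m.gguf", "text_encoders"),
    ("calcuis/wan-1.3b-gguf", "pig_wan_vae_fp32-f16.gguf", "vae"),
]

for repo_id, filename, subdir in files:
    hf_hub_download(
        repo_id=repo_id,
        filename=filename,
        local_dir=f"ComfyUI/models/{subdir}",  # lands the file where the loader looks for it
    )
```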

![screenshot](https://raw.githubusercontent.com/calcuis/comfy/master/wan-t2v.gif)

## **workflow**
- for the i2v model, drag **clip-vision-h** to > `./ComfyUI/models/clip_vision`
- run the .bat file in the main directory (assuming you are using the gguf pack below)
- if you opt to use the [**fp8 scaled umt5xxl**](https://huggingface.co/calcuis/wan-gguf/blob/main/t5xxl_um_fp8_e4m3fn_scaled.safetensors) encoder (this applies to any fp8-scaled t5, actually), please use cpu offload: switch **device** from default to **cpu** in the **gguf clip loader** (it won't affect speed); btw, both the [**gguf umt5xxl**](https://huggingface.co/calcuis/wan-1.3b-gguf/blob/main/umt5-xxl-encoder-q4_k_m.gguf) encoder and the [**gguf vae**](https://huggingface.co/calcuis/wan-1.3b-gguf/blob/main/pig_wan_vae_fp32-f16.gguf) work fine
- drag any demo video (below) into your browser to load its workflow

## **review**
- `pig` is a lazy architecture for gguf-node; it applies to all model, encoder and vae gguf files; if you try to run such a file in the comfyui-gguf node, you might need to manually add `pig` to its IMG_ARCH_LIST (in loader.py), which is easier than editing the gguf file itself (see the sketch at the end of this card); btw, model architectures compatible with comfyui-gguf, including `wan`, should also work in gguf-node
- the 1.3b t2v **gguf** works fine; good for older or low-end machines

### **reference**
- base model from [wan-ai](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B)
- comfyui from [comfyanonymous](https://github.com/comfyanonymous/ComfyUI)
- pig architecture from [connector](https://huggingface.co/connector)
- gguf-node ([pypi](https://pypi.org/project/gguf-node)|[repo](https://github.com/calcuis/gguf)|[pack](https://github.com/calcuis/gguf/releases))
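
for the comfyui-gguf note in the review section above, the edit is just adding one string to the architecture whitelist in loader.py; a rough sketch is below (the pre-existing entries shown are assumptions, keep whatever the real file already lists)

```python
# ComfyUI-GGUF loader.py (illustrative sketch only; the existing entries shown
# here are assumptions, leave whatever the installed file actually defines)
IMG_ARCH_LIST = {
    "wan",  # already-supported architectures stay as they are
    "pig",  # add this entry so pig-architecture gguf files are accepted
}
```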