________               ______    ____         ___ ___ _______ ______ _______       ____      ______ ______        _______ _______ _______ _______ 
|  |  |  |.---.-.-----.|__    |  |_   | ______|   |   |   _   |      |    ___|_____|_   |    |__    |   __ \______|     __|     __|   |   |    ___|
|  |  |  ||  _  |     ||    __|__ _|  ||______|   |   |       |   ---|    ___|______||  |_ __|__    |   __ <______|    |  |    |  |   |   |    ___|
|________||___._|__|__||______|__|______|      \_____/|___|___|______|_______|     |______|__|______|______/      |_______|_______|_______|___|                                                                                                                                                
                                                                                                                                                                                                                                                                                           

Wan-2.1-VACE-1.3B-GGUF

Direct GGUF Conversion of Wan2.1-VACE-1.3B

Wan2.1 is an open-source suite of video foundation models that runs on consumer-grade GPUs and excels at video generation tasks such as text-to-video, image-to-video, and video editing; it even supports visual text generation.

Table of Contents 📝

  1. ▶ Usage
  2. 📃 License
  3. 🙏 Acknowledgements

▶ Usage

Download the models using huggingface-cli:

pip install "huggingface_hub[cli]"
huggingface-cli download samuelchristlie/Wan2.1-VACE-1.3B-GGUF --local-dir ./Wan2.1-VACE-1.3B-GGUF

You can also download directly from this page.
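If you prefer doing this from Python, a minimal sketch using the huggingface_hub library (the same package installed above) achieves the same result as the CLI command:

```python
from huggingface_hub import snapshot_download

# Download every file in the repo to a local folder,
# equivalent to the huggingface-cli command above.
local_path = snapshot_download(
    repo_id="samuelchristlie/Wan2.1-VACE-1.3B-GGUF",
    local_dir="./Wan2.1-VACE-1.3B-GGUF",
)
print(local_path)  # path to the downloaded files
```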

📃 License

This model is a derivative work of the original model, which is licensed under the Apache 2.0 License, and is therefore distributed under the terms of the same license.

πŸ™ Acknowledgements

Thanks to Patrick Gillespie for creating the ASCII text art tool used in this project: https://patorjk.com/software/taag/

Wan-AI for the Wan model: https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B

city96 for their GGUF conversion work: https://huggingface.co/city96

GGUF details

Model size: 2.15B params
Architecture: wan
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit