How to install rapid-wan-22-i2v-gguf?
Sorry for the noob question. I downloaded https://civitai.com/models/1855105/rapid-wan-22-i2v-gguf, but I don't know how to include it in the workflow shown in the screenshots on the model card.
Can anyone help, please? Thanks.
Take a look at the JSON workflow provided on that page @ https://civitai.com/models/1855105/rapid-wan-22-i2v-gguf
You'll also need the files that were stripped out of that GGUF model, like the umt5xxl text encoder and the WAN 2.1 VAE. You might be able to pull those from my AIO model and just use the GGUF model loader for the "model" part.
Got it. Thanks! :)
I downloaded the GGUF, got the workflow, and installed the necessary nodes (I think), but I can't figure out where to put Rapid AIO, or where to get the files for the other nodes in the workflow. :(
gguf model to comfyui/models/unet
umt5_xxl_fp8_e4m3fn_scaled to comfyui/models/text_encoders
wan_2.1_vae to comfyui/models/vae
clip_vision_h to comfyui/models/clip_vision
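The folder layout above can be sketched as a small shell script. This is just an illustration, assuming a default ComfyUI checkout in `$HOME/ComfyUI` and the downloads sitting in `$HOME/Downloads`; the exact filenames below are examples, so match them to what you actually downloaded.

```shell
# Sketch: put each downloaded file in its ComfyUI models subfolder.
# COMFYUI and DL are assumptions; override them for your setup.
COMFYUI="${COMFYUI:-$HOME/ComfyUI}"
DL="${DL:-$HOME/Downloads}"

# Create the target folders if they don't exist yet.
mkdir -p "$COMFYUI/models/unet" \
         "$COMFYUI/models/text_encoders" \
         "$COMFYUI/models/vae" \
         "$COMFYUI/models/clip_vision"

# Move each file into place if it exists (skip quietly otherwise).
# Example filenames only -- check your actual download names.
[ -f "$DL/rapid-wan-22-i2v.gguf" ] && \
    mv "$DL/rapid-wan-22-i2v.gguf" "$COMFYUI/models/unet/"
[ -f "$DL/umt5_xxl_fp8_e4m3fn_scaled.safetensors" ] && \
    mv "$DL/umt5_xxl_fp8_e4m3fn_scaled.safetensors" "$COMFYUI/models/text_encoders/"
[ -f "$DL/wan_2.1_vae.safetensors" ] && \
    mv "$DL/wan_2.1_vae.safetensors" "$COMFYUI/models/vae/"
[ -f "$DL/clip_vision_h.safetensors" ] && \
    mv "$DL/clip_vision_h.safetensors" "$COMFYUI/models/clip_vision/"

true
```

After restarting ComfyUI (or refreshing the node lists), the files should show up in the corresponding loader node dropdowns.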
Hello Phr00t. About GGUF: can you give us new T2V versions? The other person on Civitai only released version 1, while you're already on version ten.
What about Rapid AIO? Where do I add that in the workflow provided in civitai?
I'm sorry, but I'm really new to all this, so if you don't mind, please ELI5
The "GGUF model" you got from Civitai is a slimmed-down version of my Rapid AIO. So if you're using that GGUF model, you don't need anything from my repository in the workflow. I provide the "FP8" version of it, which is also slimmed down, but not as much as the GGUF versions you get elsewhere.
Ah I see. Thank you so much for clarifying! :)
What about the new GGUF T2V models? The original Rapid is too heavy. I have fewer problems with the GGUF version.