Thanks!
Our savior ❤
Thank you, bro! "You need to download both a high-noise model and a low-noise model. High noise is used for the first steps and the low-noise for the details" - can I use, for instance, Q8_0 for the high-noise model (the first steps) and Q4 for the low-noise one? Is that possible?
Yes, that's possible, if you want a bit more speed on those steps.
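For anyone wondering what that split looks like in practice, here's a minimal Python sketch of the idea (not the actual ComfyUI graph): the high-noise checkpoint handles the first part of the schedule and the low-noise one takes over for the rest. The model callables, step counts and switch point are all placeholders, not real Wan 2.2 API calls.

```python
# Conceptual sketch only. It assumes two placeholder denoiser callables
# (high_noise_model, low_noise_model) that each take (latent, step) and return
# a partially denoised latent, mirroring how the Wan 2.2 workflow splits the
# schedule between the two experts.
import torch

def two_stage_sample(high_noise_model, low_noise_model, latent,
                     total_steps=20, switch_step=10):
    """Run the first `switch_step` steps on the high-noise expert (e.g. a Q8_0
    GGUF) and the remaining steps on the low-noise expert (e.g. a Q4 GGUF),
    so each checkpoint only needs to be active for its half of the schedule."""
    for step in range(total_steps):
        model = high_noise_model if step < switch_step else low_noise_model
        latent = model(latent, step)
    return latent

# Example with dummy "models" that just shrink the latent a little each step:
if __name__ == "__main__":
    dummy = lambda latent, step: latent * 0.9
    x = torch.randn(1, 16, 8, 8)
    print(two_stage_sample(dummy, dummy, x).shape)
```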
Thank you so very much! I explored the workflow and found out that I can load those models separately one by one, so I can run even Q6 on my 4070 with 12 GB VRAM! That's so handy, thank you!
UPD: the 4070 throws an OOM even on Q4, so guys, 12 GB VRAM is not enough. Also use the UnloadModel and Clean VRAM nodes to save some memory.
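If you want to picture what those nodes are doing, here's a rough Python equivalent: drop every reference to the high-noise checkpoint and reclaim its VRAM before loading the low-noise one. The load_gguf calls and file names are hypothetical stand-ins, not real ComfyUI or diffusers functions; only the gc/torch cleanup calls are real.

```python
# Rough sketch of what an "unload model / clean VRAM" step does between the two
# passes. The commented-out load_gguf() lines are hypothetical placeholders.
import gc
import torch

# high_noise_model = load_gguf("wan2.2_high_noise_Q4.gguf")   # hypothetical loader
# ... run the first (high-noise) sampling steps ...

# high_noise_model = None     # drop the last reference to the checkpoint
gc.collect()                  # let Python collect the now-unreferenced CUDA tensors
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # hand the freed blocks back to the driver
    print(f"VRAM still allocated: {torch.cuda.memory_allocated() / 1e9:.2f} GB")

# low_noise_model = load_gguf("wan2.2_low_noise_Q4.gguf")     # hypothetical loader
# ... run the remaining (low-noise) steps ...
```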
I'm guessing Wan2.1 LoRAs don't work with this new one...? And if the VideoAI gods smile on us and they do work, do we need to "add" them to both models - high-noise and low-noise?
EDIT: I did a few tests, and so far some LoRAs work with some tweaks. E.g. lightx2v works with a higher strength value, but that may "overpower" 2.2's own weights - and therefore some of its new (good) stuff.
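As a hedged sketch of the "add the LoRA to both models" idea: apply_lora() below is a hypothetical stand-in for whatever LoRA loader your stack uses (in ComfyUI it would be two separate LoRA-loader nodes, one per model), and the file name and strength values are illustrative assumptions, not recommended settings.

```python
# Hypothetical sketch: same LoRA file applied to both experts, possibly at
# different strengths (e.g. bumping lightx2v higher than on Wan 2.1).

def apply_lora(model_name: str, lora_path: str, strength: float) -> str:
    """Hypothetical helper: pretend to merge a LoRA into the named model."""
    print(f"merging {lora_path} into {model_name} at strength {strength}")
    return model_name

high = apply_lora("wan2.2_high_noise", "lightx2v.safetensors", strength=2.0)
low  = apply_lora("wan2.2_low_noise",  "lightx2v.safetensors", strength=1.0)
```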