Native speedup ('turbo') LoRA?
Hi @lodestones , first of all, thank you for such a great model — this should definitely become the new standard for 'flux + danbooru' capability.
Do you have any plans to provide 'native' speedup LoRAs? I've tried various ones made for Flux ('8-step LoRA', '12-step LoRA', and so on), but they tend to change the style and/or other aspects too much.
I've got a 4070 12GB, and with CFG added it is already twice as slow as Flux... Not that this is unexpected, but I'm open to any suggestions on how to speed it up.
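For context, the "twice as slow" part follows directly from how classifier-free guidance works: each sampling step runs the model twice (one conditional and one unconditional pass), whereas guidance-distilled models like Flux need only one. A minimal back-of-the-envelope sketch (the function name and numbers here are illustrative, not benchmarks):

```python
# Rough cost model: classifier-free guidance (CFG) evaluates the
# diffusion model twice per sampling step (conditional prompt +
# unconditional/negative prompt), while a guidance-distilled model
# evaluates it once. Total work scales with forward passes.

def total_forward_passes(steps: int, cfg_enabled: bool) -> int:
    """Number of model forward passes for a full sampling run."""
    passes_per_step = 2 if cfg_enabled else 1
    return steps * passes_per_step

# Same step count, but CFG doubles the work:
print(total_forward_passes(20, cfg_enabled=True))   # 40
print(total_forward_passes(20, cfg_enabled=False))  # 20
```

This is also why step-reduction LoRAs and faster attention kernels are the main levers: they cut either the number of steps or the cost of each forward pass.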
Thanks again!
There are a couple here; I haven't tested them yet, though:
https://huggingface.co/silveroxides/Chroma-LoRA-Experiments
Thanks, I'd already tried these before. I think I managed to put together a combo that produces nice photorealistic results in just 20 steps (using Detail Daemon to add details, with dpmpp_2m / sgm_uniform).
It's exciting to compare previous and latest versions of this model and watch it converge better and better with every release, though it still often produces anatomical errors (which can usually be fixed with negative-prompt adjustments).
Okay, I managed to install Triton + SageAttention under Windows, and now the speed is fine.